<?xml version='1.0' encoding='UTF-8'?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:id="migratingdata">
<title>Migrating Data</title>
<para>Migrating data into Evergreen can be one of the most daunting tasks for an administrator. This chapter explains procedures for migrating
bibliographic records, copies and patrons into the Evergreen system. It requires advanced ILS administration experience, knowledge of Evergreen data structures,
and knowledge of how to export data from your current system (or access to data export files from that system).</para>
<section xml:id="migratingbibrecords">
<title>Migrating Bibliographic Records</title>
<indexterm><primary>migrating</primary><secondary>importing bibliographic records</secondary></indexterm>
<para>One of the most important and challenging tasks is migrating your bibliographic records to a new system. The procedure may differ depending on the system from which you
are migrating and the content of the MARC records exported from it. The procedures in this section deal with the process once the data from the existing system
has been exported into MARC records; they do not cover exporting data from your existing non-Evergreen system.</para>
<para>Several tools for importing bibliographic records into Evergreen can be found in the Evergreen installation folder
(<filename class="directory">/home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/src/extras/import/</filename>) and are also available from the Evergreen repository
(<link xl:href="http://svn.open-ils.org/trac/ILS/browser/branches/rel_1_6_1/Open-ILS/src/extras/import" xl:title="import scripts - Evergreen repository">http://svn.open-ils.org/trac/ILS/browser/branches/rel_1_6_1/Open-ILS/src/extras/import</link>).</para>
<title>Converting MARC Records to Evergreen BRE JSON Format</title>
<indexterm><primary>BRE JSON</primary></indexterm>
<para>If you are starting with MARC records from your existing system or another source, use the <filename>marc2bre.pl</filename> script to create the JSON representation of a bibliographic
record entry (hence <quote>bre</quote>) in Evergreen. <filename>marc2bre.pl</filename> can perform the following functions:</para>
<listitem><para>Converts <systemitem>MARC-8</systemitem> encoded records to <systemitem>UTF-8</systemitem> encoding</para></listitem>
<listitem><para>Converts <systemitem>MARC21</systemitem> to <systemitem>MARCXML21</systemitem></para></listitem>
<listitem><para>Selects the unique record number field (common choices are '035' or '001'; check your records, as you might be surprised how often a supposedly unique field
actually has duplicates, though <filename>marc2bre.pl</filename> will select a unique identifier for subsequent duplicates)</para></listitem>
<listitem><para>Extracts certain pertinent fields for indexing and display purposes (along with the complete MARCXML21 record)</para></listitem>
<listitem><para>Sets the ID number of the first record from this batch to be imported into the biblio.record_entry table (hint: run the following
<systemitem>SQL</systemitem> to determine what this number should be to avoid conflicts):</para>
<userinput>psql -U postgres evergreen</userinput>
<userinput># SELECT MAX(id)+1 FROM biblio.record_entry;</userinput>
</listitem>
<para>If you are processing multiple sets of MARC records with <command>marc2bre.pl</command> before loading the records into the database, you will need to keep track
of the starting ID number for each subsequent batch of records that you are importing. For example, if you are processing three files of 10,000
MARC records each into a clean database, you would use the <option>--startid 1</option>, <option>--startid 10001</option>, and <option>--startid 20001</option>
parameters for each respective file.</para>
<para>Ignore <quote>trash</quote> fields that you do not want to retain in Evergreen</para>
<para>If you use <filename>marc2bre.pl</filename> to convert your MARC records from the <systemitem>MARC-8</systemitem> encoding to the UTF-8 encoding, it relies
on the <systemitem>MARC::Charset</systemitem> Perl module to complete the conversion. When importing a large set of items, you can speed up the process by using a
utility like <systemitem>marc4j</systemitem> or <systemitem>marcdumper</systemitem> to convert the records
to <systemitem>MARC21XML</systemitem> and <systemitem>UTF-8</systemitem> before running them through <command>marc2bre.pl</command> with the
<option>--marctype=XML</option> flag to tell <command>marc2bre.pl</command> that the records are already in <systemitem>MARC21XML</systemitem> format with
the <systemitem>UTF-8</systemitem> encoding. If you take this approach, due to a current limitation of <systemitem>MARC::File::XML</systemitem> you have to do a
horrible thing and ensure that there are no namespace prefixes in front of the element names. <filename>marc2bre.pl</filename> cannot parse the following example:</para>
<programlisting language="xml">
<?xml version="1.0" encoding="UTF-8" ?>
<marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.loc.gov/MARC/slim
  http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <marc:leader>00677nam a2200193 a 4500</marc:leader>
  <marc:controlfield tag="001">H01-0000844</marc:controlfield>
  <marc:controlfield tag="007">t </marc:controlfield>
  <marc:controlfield tag="008">060420s1950 xx 000 u fre d</marc:controlfield>
  <marc:datafield tag="040" ind1=" " ind2=" ">
    <marc:subfield code="a">CaOHCU</marc:subfield>
    <marc:subfield code="b">fre</marc:subfield>
  </marc:datafield>
</marc:collection>
</programlisting>
<para>But <filename>marc2bre.pl</filename> can parse the same example with the namespace prefixes removed:</para>
<programlisting language="xml">
<?xml version="1.0" encoding="UTF-8" ?>
<collection xmlns:marc="http://www.loc.gov/MARC21/slim"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.loc.gov/MARC/slim
  http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <leader>00677nam a2200193 a 4500</leader>
  <controlfield tag="001">H01-0000844</controlfield>
  <controlfield tag="007">t </controlfield>
  <controlfield tag="008">060420s1950 xx 000 u fre d</controlfield>
  <datafield tag="040" ind1=" " ind2=" ">
    <subfield code="a">CaOHCU</subfield>
    <subfield code="b">fre</subfield>
  </datafield>
</collection>
</programlisting>
<title>Converting Records for Import into PostgreSQL</title>
<indexterm><primary>migrating</primary><secondary>import into PostgreSQL</secondary></indexterm>
<para>Once you have your records in Evergreen's <systemitem>BRE JSON</systemitem> format, use <command>direct_ingest.pl</command> to convert the records
into the generic ingest <systemitem>JSON</systemitem> format for Open-ILS.
This step uses the <systemitem>open-ils.ingest</systemitem> application to extract the data that will be indexed in the database.</para>
<para>Once you have your records in Open-ILS <systemitem>JSON</systemitem> ingest format, use <command>pg_loader.pl</command> to convert these records into a
set of <systemitem>SQL</systemitem> statements that you can use to
load the records into PostgreSQL. The class IDs passed to the <option>--order</option> and <option>--autoprimary</option> command line options (bre, mrd, mfr, etc.) map to classes defined in
<filename>/openils/conf/fm_IDL.xml</filename>.</para>
<title>Adding Metarecords to the Database</title>
<indexterm><primary>migrating</primary><secondary>adding metarecords</secondary></indexterm>
<para>Once you have loaded the records into PostgreSQL, you can create metarecord entries in the <link linkend="metabib.table.metarecord">metabib.metarecord</link> table by running the following <systemitem>SQL</systemitem>:</para>
<userinput>psql evergreen</userinput>
<userinput># \i /home/opensrf/Evergreen-ILS-1.6*/src/extras/import/quick_metarecord_map.sql</userinput>
<para>Metarecords are required to place holds on items, among other actions.</para>
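<para>As a quick sanity check (a sketch only; the table names are from the Evergreen schema, and the exact counts will vary with your data), you can compare the number of metarecords created against the number of live bibliographic records:</para>
<programlisting language="sql">
-- After quick_metarecord_map.sql completes, every non-deleted
-- bibliographic record should be reachable through a metarecord.
SELECT (SELECT COUNT(*) FROM metabib.metarecord) AS metarecords,
       (SELECT COUNT(*) FROM biblio.record_entry WHERE NOT deleted) AS bib_records;
</programlisting>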
<section xml:id="migrating_records_using_migration_tools">
<title>Migrating Bibliographic Records Using the ESI Migration Tools</title>
<indexterm><primary>migrating</primary><secondary>Migration Tools</secondary></indexterm>
<para>The following procedure explains how to migrate bibliographic records from MARC records into Evergreen. It does not cover exporting records from specific proprietary ILS
systems. For assistance with exporting records from your current system, please refer to your system's manuals, or ask for help from the <link linkend="more_info">Evergreen community</link>.</para>
<para>Download the Evergreen <link xl:href="http://git.esilibrary.com/?p=migration-tools.git;a=summary"
xl:title="Equinox migration utilities">migration utilities</link> from the git repository.</para>
<para>Use the command <command>git clone git://git.esilibrary.com/git/migration-tools.git</command> to clone the migration tools.</para>
<para>Install the migration tools:</para>
cd migration-tools/Equinox-Migration
<para>Dump the MARC records into MARCXML using <systemitem>yaz-marcdump</systemitem>:</para>
echo '<?xml version="1.0" encoding="UTF-8" ?>' > imported_marc_records.xml
yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml imported_marc_records.mrc >> imported_marc_records.xml
<para>Test the validity of the XML file using <systemitem>xmllint</systemitem>:</para>
xmllint --noout imported_marc_records.xml 2> marc.xml.err
<para>Clean up the MARC XML file using the <systemitem>marc_cleanup</systemitem> utility:</para>
marc_cleanup --marcfile=imported_marc_records.xml --fullauto [--renumber-from #] -ot 001
<para>The <option>--renumber-from</option> option is required if you already have bibliographic records in your system. Use it to set the starting ID number higher
than the last ID in the biblio.record_entry table. The <command>marc_cleanup</command> command will generate a file called <filename>clean.marc.xml</filename>.</para>
<para>Create a fingerprint file using the <systemitem>fingerprinter</systemitem> utility:</para>
fingerprinter -o incumbent.fp -x incumbent.ex clean.marc.xml
<para><systemitem>fingerprinter</systemitem> is used for deduplication of the incumbent records. The <option>-o</option> option specifies the
output file and the <option>-x</option> option specifies the error output file.</para>
<para>Create a fingerprint file for the existing Evergreen bibliographic records, using the <systemitem>fingerprinter</systemitem> utility, if you
have bibliographic records previously imported into your system:</para>
fingerprinter -o production.fp -x production.fp.ex --marctype=MARC21 existing_marc_records.mrc --tag=901 --subfield=c
<para>Here <systemitem>fingerprinter</systemitem> fingerprints the existing (production) records so that they can be deduplicated against the incumbent records.</para>
<para>Create a merged fingerprint file, removing duplicate records:</para>
cat production.fp incumbent.fp | sort -r > dedupe.fp
match_fingerprints [-t start id] -o records.merge dedupe.fp
<para>Create a new import XML file using the <systemitem>extract_loadset</systemitem> utility:</para>
<userinput>extract_loadset -l 1 -i clean.marc.xml -o merged.xml records.merge</userinput>
<para>Extract all of the currently used TCNs and generate the .bre and .ingest files to prepare for the bibliographic record load:</para>
psql -U evergreen -c "select tcn_value from biblio.record_entry where not deleted" | perl -npe 's/^\s+//;' > used_tcns
marc2bre.pl --idfield 903 [--startid=#] --marctype=XML -f final.xml --used_tcn_file=used_tcns > evergreen_bre_import_file.bre
<para>The <option>--startid</option> option needs to match the start ID used in earlier steps and must be higher than the largest ID value
in the biblio.record_entry table. The <option>--idfield</option> option should match the MARC datafield used to store your record IDs.</para>
<para>Ingest the bibliographic records into the Evergreen database:</para>
direct_ingest.pl < evergreen_bre_import_file.bre > evergreen_ingest_file.ingest
parallel_pg_loader.pl \
-a msefe evergreen_ingest_file.ingest
<para>Load the records using <command>psql</command> and the SQL scripts generated in the previous step:</para>
psql -U evergreen < pg_loader-output.sql > load_pg_loader-output
psql -U evergreen < create_metabib.sql > log.create_metabib
<para>Extract holdings from the MARC records for importing copies into Evergreen using the <systemitem>extract_holdings</systemitem> utility:</para>
extract_holdings --marcfile=clean.marc.xml --holding 999 --copyid 999i --map holdings.map
<para>This command extracts holdings based on the 999 datafield in the MARC records, with the copy ID taken from subfield i of the 999 datafield. You may
need to adjust these options based on the field used for holdings information in your MARC records.</para>
<para>The <option>--map</option> option <filename>holdings.map</filename> refers to a file used for mapping subfields to the holdings data you would like extracted. Here is an example based on mapping holdings data to the 999 datafield:</para>
circ_modifier 999 t
<para>Running the <systemitem>extract_holdings</systemitem> utility should produce the file <filename>HOLDINGS.pg</filename>, similar to:</para>
<programlisting>
egid, hseq, l_call_num, l_barcode, l_location, l_owning_lib, l_circ_modifier,
40  0  HD3616.K853 U54 1997  30731100751928  STACKS  FENNELL    BOOK
41  1  HV6548.C3 S984 1998   30731100826613  STACKS  FENNELL    BOOK
41  2  HV6548.C3 S984 1998   30731100804958  STACKS  BRANTFORD  BOOK
</programlisting>
<para>This file can be used for importing holdings into Evergreen. The <database class="field">egid</database> field is critical: it is used to link the volume and copy to
the bibliographic record. Please refer to <link linkend="migratingbibrecordcopies">Adding Copies to Bibliographic Records</link> for the steps to import your holdings into Evergreen.</para>
<section xml:id="migratingbibrecordcopies">
<title>Adding Copies to Bibliographic Records</title>
<para>Before bibliographic records can be found in an OPAC search, copies need to be created. It is very important to understand how the various tables relate to each other
with regard to holdings maintenance.</para>
<para>The following procedure will guide you through the process of populating Evergreen with volumes and copies. This is a very simple example; the SQL queries may need to be adjusted
for the specific data in your holdings.</para>
<para>Create a <systemitem>staging_items</systemitem> staging table to hold the holdings data:</para>
<programlisting language="sql">
CREATE TABLE staging_items (
    callnum text,    -- call number label
    bibkey int,      -- biblio.record_entry_id
    id int,          -- copy sequence (hseq from extract_holdings)
    createdate date,
    location text,
    barcode text,
    item_type text,
    owning_lib text  -- actor.org_unit.shortname
);
</programlisting>
<para>Log in to the Evergreen database using <command>psql</command> and run the following COPY command to load the items generated by the <systemitem>extract_holdings</systemitem> utility:</para>
<programlisting language="sql">
COPY staging_items (bibkey, id, callnum, barcode, location, owning_lib, item_type) FROM 'HOLDINGS.pg';
</programlisting>
<para>The file <filename>HOLDINGS.pg</filename> and/or the COPY query may need to be adjusted for your particular circumstances.</para>
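<para>For example, if your <filename>HOLDINGS.pg</filename> turned out to be tab-delimited with empty strings for missing values, a hedged variant of the COPY command (the path and options here are assumptions; adjust them to your file) would be:</para>
<programlisting language="sql">
-- Load a tab-delimited holdings file, treating empty strings as NULL.
COPY staging_items (bibkey, id, callnum, barcode, location, owning_lib, item_type)
    FROM '/tmp/HOLDINGS.pg' WITH DELIMITER E'\t' NULL '';
</programlisting>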
<para>Generate shelving locations from your staging table:</para>
<programlisting language="sql">
INSERT INTO asset.copy_location (name, owning_lib)
    SELECT DISTINCT l.location, ou.id
    FROM staging_items l
    JOIN actor.org_unit ou ON (l.owning_lib = ou.shortname);
</programlisting>
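<para>Because this JOIN silently drops staging rows whose owning_lib does not match an org unit shortname, it can be worth checking for such rows first. A sketch against the staging table defined above:</para>
<programlisting language="sql">
-- List owning_lib values with no matching actor.org_unit.shortname;
-- holdings in these rows would be skipped by the JOINs in this procedure.
SELECT DISTINCT l.owning_lib
    FROM staging_items l
    LEFT JOIN actor.org_unit ou ON (l.owning_lib = ou.shortname)
    WHERE ou.id IS NULL;
</programlisting>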
<para>Generate circulation modifiers from your staging table:</para>
<programlisting language="sql">
INSERT INTO config.circ_modifier (code, name, description, sip2_media_type, magnetic_media)
    SELECT DISTINCT item_type AS code,
        item_type AS name,
        LOWER(item_type) AS description,
        '001' AS sip2_media_type,
        FALSE AS magnetic_media
    FROM staging_items
    WHERE item_type NOT IN (SELECT code FROM config.circ_modifier);
</programlisting>
<para>Generate call numbers from your staging table:</para>
<programlisting language="sql">
INSERT INTO asset.call_number (creator, editor, record, label, owning_lib)
    SELECT DISTINCT 1, 1, l.bibkey, l.callnum, ou.id
    FROM staging_items l
    JOIN actor.org_unit ou ON (l.owning_lib = ou.shortname);
</programlisting>
<para>Generate copies from your staging table:</para>
<programlisting language="sql">
INSERT INTO asset.copy (
    circ_lib, creator, editor, create_date, barcode,
    status, location, loan_duration, fine_level, circ_modifier, deposit, ref, call_number)
    SELECT DISTINCT ou.id AS circ_lib,
        1 AS creator,
        1 AS editor,
        l.createdate AS create_date,
        l.barcode AS barcode,
        0 AS status,
        cl.id AS location,
        2 AS loan_duration,
        2 AS fine_level,
        l.item_type AS circ_modifier,
        FALSE AS deposit,
        CASE
            WHEN l.item_type = 'REFERENCE' THEN TRUE
            ELSE FALSE
        END AS ref,
        cn.id AS call_number
    FROM staging_items l
        JOIN actor.org_unit ou
            ON (l.owning_lib = ou.shortname)
        JOIN asset.copy_location cl
            ON (ou.id = cl.owning_lib AND l.location = cl.name)
        JOIN asset.call_number cn
            ON (ou.id = cn.owning_lib
            AND l.callnum = cn.label);
</programlisting>
<para>You should now have copies in your Evergreen database and should be able to search for and find the bibliographic records with attached copies.</para>
<section xml:id="migratingpatrons">
<title>Migrating Patron Data</title>
<indexterm><primary>migrating</primary><secondary>importing patrons</secondary></indexterm>
<para>This section explains the task of migrating your patron data from comma delimited files<indexterm><primary>comma delimited files</primary></indexterm> into Evergreen.
It does not deal with the process of exporting from the non-Evergreen
system, since that process varies depending on where you are extracting your patron records. Patron data could come from an ILS, or from a student database in the case of an academic institution.</para>
<para>When importing records into Evergreen you will need to populate three tables in your Evergreen database:</para>
<listitem><link linkend="actor.table.usr">actor.usr</link> - The main table for user data</listitem>
<listitem><link linkend="actor.table.card">actor.card</link> - Stores the barcode for users; users can have more than one card, but only one can be active at a given time</listitem>
<listitem><link linkend="actor.table.usr-address">actor.usr_address</link> - Used for storing address information; a user can have more than one address.</listitem>
<para>Before following the procedures below to import patron data into Evergreen, it is a good idea to examine the fields in these tables in order to decide on a strategy
for the data to include
in your import. It is important to understand the data types and constraints on each field.</para>
<para>Export the patron data from your existing ILS or from another source into a comma delimited file. The comma delimited file used for importing
the records should use Unicode (UTF8)<indexterm><primary>Unicode</primary></indexterm> character encoding.</para>
<para>Create a staging table.<indexterm><primary>staging table</primary></indexterm> A staging table will allow you to tweak the data before importing.
Here is an example SQL statement:</para>
<indexterm><primary>sql</primary></indexterm>
<programlisting language="sql">
CREATE TABLE students (
    student_id int, barcode text, last_name text, first_name text, program_number text,
    program_name text, email text, address_type text, street1 text, street2 text,
    city text, province text, country text, postal_code text, phone text, profile int,
    ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
    net_access_level int DEFAULT 2, password text
);
</programlisting>
<para>Note the <varname>DEFAULT</varname> variables. These allow you to set defaults for your library, or to populate required fields when your data contains
<systemitem>NULL</systemitem> values where Evergreen requires a value.</para>
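<para>For example, a sketch of backfilling required fields after loading the staging table (the values 2, 1 and 1 here are placeholders; substitute IDs that actually exist in your permission group, identification type and org unit tables):</para>
<programlisting language="sql">
-- Replace NULLs in required fields with library-chosen defaults.
UPDATE students SET profile = 2 WHERE profile IS NULL;
UPDATE students SET ident_type = 1 WHERE ident_type IS NULL;
UPDATE students SET home_ou = 1 WHERE home_ou IS NULL;
</programlisting>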
<para>Some fields may need reformatting to fit Evergreen field formats. Here is an example of SQL to adjust phone numbers in the staging
table to fit the Evergreen field:</para>
<programlisting language="sql">
UPDATE students SET phone = replace(replace(replace(rpad(substring(phone from 1 for 9), 10, '-') ||
    substring(phone from 10), '(', ''), ')', ''), ' ', '-');
</programlisting>
<para>Data <quote>massaging</quote> may be required to fit formats used in Evergreen.</para>
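<para>Other typical massaging steps, sketched here against the example staging table (adjust to your own data):</para>
<programlisting language="sql">
-- Trim stray whitespace from barcodes and usernames,
-- and normalize email addresses to lower case.
UPDATE students SET barcode = trim(barcode),
                    usrname = trim(usrname),
                    email = lower(trim(email));
</programlisting>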
<para>Insert records from the staging table into the <link linkend="actor.table.usr">actor.usr</link> Evergreen table:</para>
<programlisting language="sql">
INSERT INTO actor.usr (
    profile, usrname, email, passwd, ident_type, ident_value, first_given_name,
    family_name, day_phone, home_ou, claims_returned_count, net_access_level)
    SELECT profile, students.usrname, email, student_id, ident_type, student_id,
    first_name, last_name, phone, home_ou, claims_returned_count, net_access_level
    FROM students;
</programlisting>
<para>Insert records into <link linkend="actor.table.card">actor.card</link> from <link linkend="actor.table.usr">actor.usr</link>:</para>
<programlisting language="sql">
INSERT INTO actor.card (usr, barcode)
    SELECT actor.usr.id, students.barcode
    FROM students
    INNER JOIN actor.usr
        ON students.usrname = actor.usr.usrname;
</programlisting>
<para>This assumes a one-to-one card to patron relationship. If your patron data import has multiple cards assigned to one patron, more complex import scripts may be required to look for inactive or active flags.</para>
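<para>For instance, if the export carries a flag for the active card, a hedged variant (the <literal>card_active</literal> column is hypothetical, not part of the staging table above) would be:</para>
<programlisting language="sql">
-- Hypothetical variant: the staging data carries a card_active flag.
-- Insert every card, but mark only flagged cards as active.
INSERT INTO actor.card (usr, barcode, active)
    SELECT actor.usr.id, students.barcode, students.card_active
    FROM students
    INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;
</programlisting>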
<para>Update the actor.usr.card field with actor.card.id to associate the active card with the user:</para>
<programlisting language="sql">
UPDATE actor.usr
    SET card = actor.card.id
    FROM actor.card
    WHERE actor.card.usr = actor.usr.id;
</programlisting>
<para>Insert records into <link linkend="actor.table.usr-address">actor.usr_address</link> to add address information for users:</para>
<programlisting language="sql">
INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
    SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
    students.country, students.postal_code
    FROM students
    INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;
</programlisting>
<para>Update <link linkend="actor.table.usr">actor.usr</link> with the address IDs from the <link linkend="actor.table.usr-address">actor.usr_address</link> table:</para>
<programlisting language="sql">
UPDATE actor.usr
    SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
    FROM actor.usr_address
    WHERE actor.usr.id = actor.usr_address.usr;
</programlisting>
<para>This assumes one address per patron. More complex scenarios may require more sophisticated SQL.</para>
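<para>If your staging data distinguishes address types, a hedged variant (the <literal>'MAILING'</literal> value is an assumption about your export; the address_type column is from the example staging table) is to load mailing and billing addresses separately:</para>
<programlisting language="sql">
-- Insert only rows flagged as mailing addresses; repeat with the
-- billing flag (and the billing_address column) for billing rows.
INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
    SELECT actor.usr.id, students.street1, students.street2, students.city,
           students.province, students.country, students.postal_code
    FROM students
    INNER JOIN actor.usr ON students.usrname = actor.usr.usrname
    WHERE students.address_type = 'MAILING';
</programlisting>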
<title>Creating an SQL Script for Importing Patrons</title>
<para>The procedure for importing patrons can be automated with the help of an SQL script. Follow these steps to create an import script:</para>
<para>Create a new file and name it <filename>import.sql</filename>.</para>
<para>Edit the file to look similar to this:</para>
<programlisting language="sql">
BEGIN;

-- Create staging table.
CREATE TABLE students (
    student_id int, barcode text, last_name text, first_name text, program_number text,
    program_name text, email text, address_type text, street1 text, street2 text,
    city text, province text, country text, postal_code text, phone text, profile int,
    ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
    net_access_level int DEFAULT 2, password text
);

-- Insert records from the staging table into the actor.usr table.
INSERT INTO actor.usr (
    profile, usrname, email, passwd, ident_type, ident_value, first_given_name, family_name,
    day_phone, home_ou, claims_returned_count, net_access_level)
    SELECT profile, students.usrname, email, student_id, ident_type, student_id, first_name,
    last_name, phone, home_ou, claims_returned_count, net_access_level FROM students;

-- Insert records from the staging table into the actor.card table.
INSERT INTO actor.card (usr, barcode)
    SELECT actor.usr.id, students.barcode
    FROM students
    INNER JOIN actor.usr
        ON students.usrname = actor.usr.usrname;

-- Update actor.usr.card field with actor.card.id to associate active card with the user.
UPDATE actor.usr
    SET card = actor.card.id
    FROM actor.card
    WHERE actor.card.usr = actor.usr.id;

-- Insert records into actor.usr_address from the staging table.
INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
    SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
    students.country, students.postal_code
    FROM students
    INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;

-- Update actor.usr mailing address with id from actor.usr_address table.
UPDATE actor.usr
    SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
    FROM actor.usr_address
    WHERE actor.usr.id = actor.usr_address.usr;

COMMIT;
</programlisting>
<para>Placing the SQL statements between <code>BEGIN;</code> and <code>COMMIT;</code> creates a transaction block, so that if any SQL statement fails, the
entire process is canceled and the database is rolled back to its original state. Lines beginning with <code>--</code> are comments to let you know what
each SQL statement is doing, and are not processed.</para>
<title>Batch Updating Patron Data</title>
<indexterm><primary>migrating</primary><secondary>batch updating patrons</secondary></indexterm>
<para>For academic libraries, batch updates to add new patrons to the Evergreen database are a critical task. The above procedures and
import script can easily be adapted into an update script for importing new patrons from external databases. If the data import file contains only new patrons,
the above procedures will work well to insert those patrons. However, if the data load contains all patrons, a second staging table and a procedure to remove existing patrons from that second staging table may be required before importing the new patrons. Moreover, additional steps to update address information, and perhaps to delete
inactive patrons, may also be desired depending on the requirements of the institution.</para>
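<para>A sketch of that removal step, assuming a second staging table <literal>students_new</literal> (a hypothetical name) with the same layout as <literal>students</literal>:</para>
<programlisting language="sql">
-- Drop patrons that already exist in Evergreen (matched on username)
-- so that only genuinely new patrons remain to be inserted.
DELETE FROM students_new
    WHERE usrname IN (SELECT usrname FROM actor.usr);
</programlisting>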
<para>After the scripts to import and update patrons have been developed, another important task for library staff is to develop an import strategy and schedule
that suits the needs of the library. For academic libraries, this could be determined by the institution's registration dates. It is important to balance
the convenience of automated patron loads against the cost of processing those loads and of staff adding patrons manually.</para>
<section xml:id="electronicresourcesvisible">
<title>Making Electronic Resources Visible in the Catalogue</title>
<indexterm><primary>migrating</primary><secondary>importing bibliographic records</secondary><tertiary>electronic resources</tertiary></indexterm>
<para>For electronic resources that should be visible in the catalogue without any copies, you must set the source column value in the biblio.record_entry row for the respective
bibliographic record to a value that matches the corresponding ID of the <link linkend="config.table.bib-source">config.bib_source</link> row where the transcendant value is
<emphasis>TRUE</emphasis>. Here's a practical example:</para>
<para>Connect to your Evergreen database with <command>psql</command> (substitute username / database name as required):</para>
<userinput>psql -U postgres evergreen</userinput>
<para>Add a source for your electronic resources:</para>
<userinput># INSERT INTO config.bib_source(quality, source, transcendant)
VALUES (50, 'Institutional repository', TRUE);</userinput>
<para>Find the ID that was generated for your new source:</para>
<userinput># SELECT id FROM config.bib_source WHERE source = 'Institutional repository';</userinput>
<para>Update the source column of the bibliographic record for the electronic resource (for the sake of the example, let's assume that the ID returned for the new
source was 4, and that we know that the bib record with ID 75 is an electronic resource from your institutional repository):</para>
<userinput># UPDATE biblio.record_entry SET source = 4 WHERE biblio.record_entry.id = 75;</userinput>
<section xml:id="emptydatabase">
<title>Restoring your Evergreen Database to an Empty State</title>
<para>If you've done a test import of records and you want to quickly get Evergreen back to a pristine state, you can create a clean Evergreen database schema by performing the following steps:</para>
<userinput>cd ILS/Open-ILS/src/sql/Pg/</userinput>
<para>Rebuild the database schema:</para>
<userinput>./build-db.sh [db-hostname] [db-port] [db-name] [db-user] [db-password] [db-version]</userinput>
<caution><para>This will remove all of your data from the database and restore the default values.</para></caution>
</section>
</chapter>