phasefx [Mon, 22 Nov 2010 20:38:55 +0000 (20:38 +0000)]
append to the bottom of the list in XUL-based hold list interfaces. As a result, rows appended off-screen (with just the hold id) will not make a network request for fleshing until they either become visible or a column sort action is initiated
gmc [Mon, 22 Nov 2010 17:17:13 +0000 (17:17 +0000)]
bug #680096 - upgrade script to partially reingest bibs after upgrade to 2.0
This solves the problem of the new facets sidebar showing up empty in
OPAC search results. Since the process of populating metabib.facet_entry
and updating metabib.*_field_entry can take a long time on large databases,
the update SQL is generated by a separate script, reingest-1.6-2.0.pl. Usage
from an example run is:
./reingest-1.6-2.0.pl: generate SQL script to reingest bibs during an upgrade to Evergreen 2.0
By default, the script writes to the file reingest-1.6-2.0.sql. To modify
this script's behavior, you can supply the following options:
--config /path/to/opensrf_core.xml   used to get connection information to
                                     the Evergreen database
--chunk_size n                       number of bibs to reingest in a chunk;
                                     specify if you don't want all of the
                                     bibs in the database to be reindexed
                                     in a single transaction
--output /path/to/output_file.sql    path of the output SQL file
Writing output to file reingest-1.6-2.0.sql
SQL script complete. To perform the reingest, please run the script using
the psql program, e.g.,
If you are running a large Evergreen installation, it is recommended that you
examine the script first; note that a reingest of a large Evergreen database
can take several hours.
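The psql example referenced above appears to have been truncated in the commit message; a typical invocation might look like the following sketch (the config path, database name, and user are assumptions, not values from the source):

```sh
# Illustrative only: generate the reingest SQL, then run it via psql.
./reingest-1.6-2.0.pl --config /openils/conf/opensrf_core.xml \
    --chunk_size 1000 --output reingest-1.6-2.0.sql
psql -U evergreen -d evergreen -f reingest-1.6-2.0.sql
```

Using --chunk_size keeps each reingest transaction small, which matters on large databases where a single multi-hour transaction would be risky.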
gmc [Mon, 22 Nov 2010 13:38:53 +0000 (13:38 +0000)]
parallel fine generator
The fine generator cronjob can now use multiple
parallel processes by setting fine_generator/parallel
in opensrf.xml to a value greater than 1. This
can speed up periodic fine generation in a database
containing a large number of overdue loans.
Also added a service to return just the list of
IDs of overdue loans and reservations - fleshing
the entire set of overdue loans when generating fines
has been observed to cause significant swap-thrashing in
at least one large database.
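A sketch of what this setting might look like in opensrf.xml (the surrounding element structure is an assumption; only the fine_generator/parallel path comes from the commit message):

```xml
<!-- Illustrative fragment: run four parallel fine generator processes.
     Values greater than 1 enable parallel periodic fine generation. -->
<fine_generator>
  <parallel>4</parallel>
</fine_generator>
```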
gmc [Mon, 22 Nov 2010 13:38:51 +0000 (13:38 +0000)]
hold targeter: add option to run parallel targeter processes
Permit the hold targeter to divvy up the work and run more than one process
at a time by setting the opensrf.xml setting hold_targeter/parallel to a
value greater than one. Doing so can significantly reduce the
time it takes to (re)target a large number of hold requests, although
only up to a point; if you increase the number of parallel targeters
beyond one, it is recommended to do so gradually.
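As with the fine generator, this might be expressed in opensrf.xml roughly as follows (the enclosing structure is an assumption; only the hold_targeter/parallel path is named in the commit message):

```xml
<!-- Illustrative fragment: run two parallel hold targeter processes. -->
<hold_targeter>
  <parallel>2</parallel>
</hold_targeter>
```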
dbs [Sat, 20 Nov 2010 19:56:10 +0000 (19:56 +0000)]
Address maintain_control_numbers() database function bug #677160
Jason Stephenson reported a bug handling records with multiple
001 or 003 fields, and supplied a set of test records to
reproduce the condition. The bug caused the ingest process
to throw a database error, rolling back the transaction and
preventing the actual ingest of those records.
The solution was to simplify the logic in maintain_control_numbers().
Now, in the case that there are either multiple 001s or 003s in the
incoming record, we simply delete all of the 003s and 001s and
create the desired 001 and 003. Also, if there are not exactly one
001 and one 003 in the incoming record, we do not try to preserve
one of those values in the 035 as it would be close to meaningless.
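The simplified logic can be sketched with plain Python over a toy record model (the real maintain_control_numbers() is a PL/pgSQL trigger function; the tuple-based record structure and function name below are illustrative assumptions):

```python
def normalize_control_numbers(fields, desired_001, desired_003):
    """Sketch of the simplified control-number logic.

    `fields` is a list of (tag, value) pairs standing in for MARC
    control fields; this models the behavior, not Evergreen's API.
    """
    ones = [v for t, v in fields if t == "001"]
    threes = [v for t, v in fields if t == "003"]

    # Only when there is exactly one 001 and one 003 is the old value
    # meaningful enough to preserve in an 035.
    preserve_035 = len(ones) == 1 and len(threes) == 1

    # Delete all existing 001s and 003s, keep everything else.
    kept = [(t, v) for t, v in fields if t not in ("001", "003")]
    if preserve_035:
        kept.append(("035", f"({threes[0]}){ones[0]}"))

    # Create the desired 001 and 003.
    kept.extend([("001", desired_001), ("003", desired_003)])
    return kept
```

With multiple 001s the old values are simply discarded; with exactly one 001/003 pair, the prior identity survives in the 035.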
Many thanks to Jason for the clear bug report and test cases!
gmc [Thu, 18 Nov 2010 16:31:36 +0000 (16:31 +0000)]
use pcrud auto-complete widget when selecting providers
Fixes general slowness working with invoice and PO forms if
more than a couple hundred providers are defined.
This could be generalized with a bit of work with Fieldmapper
to define "has-one-chosen-by-user-from-cast-of-thousands"
relationships that should trigger use of the auto-complete widget.
gmc [Thu, 18 Nov 2010 16:31:34 +0000 (16:31 +0000)]
fetchItemByIdentity now returns null immediately upon null identity
Besides being trivially more efficient, this avoids a situation
where pcrud can sometimes time out when attempting to retrieve
a row by a null PK value.
erickson [Thu, 18 Nov 2010 15:37:11 +0000 (15:37 +0000)]
fetch more than the default 10 (per page) distrib formulas; 500 is arbitrary, but still notably less than the infinity that was in effect before paging. todo: research options for using the new autofieldwidget/dojo/pcrud-store instead
senator [Wed, 17 Nov 2010 22:41:33 +0000 (22:41 +0000)]
Backport r18762 from trunk:
Wonder of wonders, a Dojo data store supporting lazy loading objects via pcrud!
So openils.PermaCrud.Store was dreamt up and directed by Mike Rylander, and
implemented by me. Right now it gives us a new way to provide widgets for
selecting objects in Dojo-based interfaces.
Where previously we had some dropdowns here and there that really shouldn't
be dropdowns (such as one for selection lists in Acq, and several for resources
and resource types in Booking -- these examples I've replaced, but there are
surely more) because loading a dropdown with potentially zillions of items
to choose from can take forever and break the interface, now we can have
autocompleting textboxes that only load items matching what you type (and
even then with a low-ish default limit so that if you're vague in your input
you still don't get huge unwieldy result sets).
Easiest way to see an example is if you already have any acq selection lists.
Just go to any catalog record, choose Actions for this Record, choose View/Place
orders, then click "Add to Selection List." In the resulting dialog, that
second field used to be a dropdown, but now it's an autocompleting textbox.
Alternatively, you can see these in the affected booking interfaces (see files
modified in this commit) under Admin -> Server Administration -> Booking.
The future promises even better things for this store. When it implements the
Dojo Write API, interfaces using grids can potentially be vastly simplified
by relying on the store to save its own dirty objects. The Notification API
would facilitate easy use of single stores with multiple widgets. All good
things for faster-to-write interfaces.
erickson [Mon, 15 Nov 2010 20:26:44 +0000 (20:26 +0000)]
return AutoIDL to its original state of loading the whole IDL if no classes are selected. This will ease the process of moving to /IDL2js; other minor cleanup
dbs [Mon, 15 Nov 2010 05:24:38 +0000 (05:24 +0000)]
Prevent creation of authority records that are truncated by one letter
The summarizeField() function grabbed the values of the XUL elements,
which were set by the keypress event listeners on the XUL elements.
However, the keypress event listener seems to capture the value of
the XUL element before the value of the new key is appended to the
existing value in a textbox - so, when you typed a new subfield, then
right-clicked to create an authority, the value that was captured was
missing the final character.
Adding the "input" event to the registered listeners captures the
actual value for creating an authority and solves the problem. It
might be possible to remove the keypress event listeners, but for
now we'll take the cautious route.
dbs [Mon, 15 Nov 2010 04:52:59 +0000 (04:52 +0000)]
Fix negative paging issue in open-ils.supercat.authority.author.startwith
When paging backwards through authority lists, we were skipping the
first page of results. By reducing the offset by the value of one
page, we restore the expected order.
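The offset arithmetic behind the fix can be illustrated in a few lines (this is a model of the paging math described, not supercat's actual code; the function name is hypothetical):

```python
def previous_page_offset(current_offset, page_size):
    """When paging backwards, step back by exactly one page,
    clamped at zero, so the first page of results is not skipped."""
    return max(0, current_offset - page_size)
```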
The same problem might affect other paging interfaces: to be determined.
dbs [Mon, 15 Nov 2010 04:52:42 +0000 (04:52 +0000)]
Do not cache the authority context menu
Caching would be great, except when you add an authority in the
flow and you expect to see it the next time you right-click
on the authority that you just added.
erickson [Fri, 12 Nov 2010 19:51:26 +0000 (19:51 +0000)]
Back-port of 18712 and related changes: Server-generated IDL JS
The goal is to reduce use of pre-onload XHR, which is known to cause
problems (dreaded white-screen-of-death) in firefox/xulrunner. Change
allows opac, staff client, and embedded browser interfaces to load
a pre-formatted JS object instead of IDL XML via XHR. In addition to
dropping the XHR, clients no longer parse the XML, which should reduce
page render time. Finally, in the staff interfaces, the full IDL is
once again loaded, so there is no need to specify per-page classes.
Per-page classes are still supported and used in the OPAC to reduce the
up-front load time.
Change requires an update to the Evergreen Apache config.
Part of this change included condensing fieldmapper.hash and
fieldmapper.dojoData content into fieldmapper.Fieldmapper to avoid
circular dependencies, which were causing problems with IE. Will
eventually want to deprecate .hash and .dojoData, but for now they still
function as before.
dbs [Thu, 11 Nov 2010 17:02:03 +0000 (17:02 +0000)]
Avoid munging 035 when a new record is created
If there is no 003 in the record when it is created, then we will not attempt
to generate a 035. If the incoming record contains a 001 and 003, then we will
create a 035.
dbs [Thu, 11 Nov 2010 17:01:32 +0000 (17:01 +0000)]
Do not supply a default value for 003 in new authority records
With cat.maintain_control_numbers enabled by default, we can trust
the database trigger to create the appropriate 003 for us - and by
not supplying a 003 in the new record, we won't create a spurious
035 for a brand new record.
dbs [Thu, 11 Nov 2010 04:18:14 +0000 (04:18 +0000)]
Update the edit dates for authority and MFHD records when they are edited
Addresses the oversight in the original implementation that missed this;
important if we're going to differentiate between creating and editing
a record for triggers.
dbs [Wed, 10 Nov 2010 22:31:12 +0000 (22:31 +0000)]
Enable "maintain control numbers" and "record ID as TCN" behavior by default
Per http://ur1.ca/2bgc4, this behavior hews more closely to the MARC21
specification. Note, however, that duplicate bib detection in the
"Import via Z39.50" interface will be somewhat affected; a more
trustworthy workaround is to include the "Local catalog" in Z39.50 searches to
determine if a matching record already exists in the database.
gmc [Wed, 10 Nov 2010 13:19:03 +0000 (13:19 +0000)]
do not use TRUNCATE when refreshing reporter.materialized_simple_record
Previous behavior would break Slony replication after doing a
bib load. Since a deletion is slower than a truncate, if you're
not using Slony replication, you may prefer to truncate rmsr
prior to calling reporter.enable_materialized_simple_record_trigger.
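For non-replicated sites, the optional manual shortcut described above might look like this (table and function names are as given in the commit message; treat this as a sketch, not a supported procedure):

```sql
-- Optional, non-Slony sites only: TRUNCATE is faster than DELETE
-- but is not replication-safe.
TRUNCATE reporter.materialized_simple_record;
SELECT reporter.enable_materialized_simple_record_trigger();
```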
miker [Mon, 8 Nov 2010 16:24:47 +0000 (16:24 +0000)]
Backporting r18652: Teach vandelay.replace_field to be a little smarter by allowing simple cases of both replacing and regexp-testing the same subfield
erickson [Mon, 8 Nov 2010 16:21:45 +0000 (16:21 +0000)]
holds retrieval API call sorting and cleanup; sort non-cancelled holds by ready-for pickup, then active, then suspended; use json_query to fetch IDs first, so id_list calls can avoid fetching the full hold objects in the ML. sort fleshed transit by send time to pick up the latest transit
dbs [Mon, 8 Nov 2010 00:39:19 +0000 (00:39 +0000)]
Improve Fedora prerequisite installer
* Explicitly install wget, which isn't installed in a minimal install
* Hack JavaScript-SpiderMonkey Makefile.PL for 32-bit Fedora
* Provide a fedora14 target
* Change "fedora-13" to "fedora13" to match OpenSRF prereq installer
gmc [Fri, 5 Nov 2010 16:46:40 +0000 (16:46 +0000)]
fix user password reset request time column def
Needs to be a timestamp with time zone; fixes a bug
where it was interpreted as a UTC time, throwing off
the calculation of the expiration of the password reset
request.
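A sketch of the kind of column change described (the table and column names here are assumptions for illustration; the point is the timestamp vs. timestamptz distinction):

```sql
-- Hypothetical: store the request time with time zone so it is not
-- misread as UTC when computing the reset request's expiration.
ALTER TABLE actor.usr_password_reset
  ALTER COLUMN request_time TYPE TIMESTAMP WITH TIME ZONE;
```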
gmc [Thu, 4 Nov 2010 01:04:15 +0000 (01:04 +0000)]
add TT helper to squeeze strings into the UNOB straitjacket
The force_jedi_unob helper strips non-ASCII characters from
template output but does so nicely by converting the string
to NFD first. This is meant to be used by EDI templates, although
could be useful for A/T output that goes to receipt
printers that cannot print diacritics.
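The NFD-then-strip technique described can be sketched in a few lines of Python (force_jedi_unob itself is a Template Toolkit helper in Perl; this just demonstrates the underlying approach, and the function name is illustrative):

```python
import unicodedata

def squeeze_to_ascii(s: str) -> str:
    """Decompose to NFD so diacritics become separate combining marks,
    then drop everything outside ASCII (i.e., the combining marks)."""
    decomposed = unicodedata.normalize("NFD", s)
    return decomposed.encode("ascii", "ignore").decode("ascii")
```

Normalizing first is what makes the strip "nice": "é" becomes "e" plus a combining accent, so only the accent is lost rather than the whole character.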
erickson [Tue, 2 Nov 2010 02:56:02 +0000 (02:56 +0000)]
in SIP patron retrieval, only fetch non-archived penalties and penalties that matter (fines, overdues, blocking penalties). pare down the penalty comparisons to avoid fleshing the penalty type, potentially numerous times for a given type, by using the constant identifiers
dbs [Tue, 2 Nov 2010 02:47:47 +0000 (02:47 +0000)]
Enable merge of authority records to do the right thing
The target and source authority records were flipped, causing
the update to fail. In reconsidering this function, it is not
necessary to change the contents of the source authority record
just to propagate the content of the target authority record
to any linked bibliographic records.
Instead, take the approach of updating the ID of the controlled
field in the bib record, then temporarily set "reingest on same
MARC" to TRUE and update the target authority record by setting
deleted = FALSE (which propagates the "changes" to the linked
bib records), then set "reingest on same MARC" flag back to its
original value.
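The sequence of steps above might look roughly like the following SQL (the table, column, and flag names are assumptions for illustration only; the real function operates on Evergreen's internal schema):

```sql
-- 1. Repoint the controlled fields in linked bib records at the target.
UPDATE authority.bib_linking SET authority = :target
 WHERE authority = :source;  -- hypothetical linking table

-- 2. Temporarily force reingest even when the MARC is unchanged.
UPDATE config.internal_flag SET enabled = TRUE
 WHERE name = 'ingest.reingest.force_on_same_marc';

-- 3. Touch the target record so the trigger propagates to linked bibs.
UPDATE authority.record_entry SET deleted = FALSE WHERE id = :target;

-- 4. Restore the flag to its original value.
UPDATE config.internal_flag SET enabled = FALSE
 WHERE name = 'ingest.reingest.force_on_same_marc';
```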
gmc [Mon, 1 Nov 2010 23:24:23 +0000 (23:24 +0000)]
yet another replication race condition fix
Fixes problems that can occur when creating a lineitem
from an existing bib in the catalog; adds an authoritative
version of open-ils.acq.lineitem.retrieve.
dbs [Mon, 1 Nov 2010 20:42:00 +0000 (20:42 +0000)]
Replace hard-coded '(CONS)' for MARC control number identifier in authority records
We created an actor.org_unit_setting, 'cat.marc_control_number_identifier', for
specifying the preferred MARC control number identifier, but when we create a
new authority record from the MARC editor, the hardcoded value of 'CONS' is being
used.
This teaches the staff client how to pull the appropriate value from the AOUS
when invoking the MARC Editor.
dbs [Mon, 1 Nov 2010 19:46:15 +0000 (19:46 +0000)]
Ensure that changes to authority records propagate to linked bibliographic records
Per https://bugs.launchpad.net/evergreen/+bug/669596, updated authority records
weren't being reflected in bibliographic records with fields that link to those
authority records. We were missing the call to authority.propagate_changes()
within the ingest trigger on authority.record_entry.