miker [Fri, 10 Dec 2010 17:22:15 +0000 (17:22 +0000)]
Patch from James Fournie to address https://bugs.launchpad.net/evergreen/+bug/622908, wherein we learn that the related item's physical description might be used as the main physical description of the main item, if the main item lacks such a field in the MARC
miker [Fri, 10 Dec 2010 16:03:49 +0000 (16:03 +0000)]
Provide a mechanism to load any random JS file via dojo.require()-ish syntax.
Why would we want to do such a thing, you might ask?
Well, the short answer is that Firefox hates pages that have more than one script block (inline is worse than tag) containing pre-onLoad XHR. So, this allows us to pull the actual loading of JS from the same domain as the page into an inline block. This lets us eliminate the WSOD on FF by pulling all (dangerous) JS into a single, final inline block, after which we don't care if the DOMContentLoaded event fires -- that's when it should fire, structurally -- but in FF it may fire for a different reason (a bug) than it should (falling off the end of the page in the rendering engine).
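As a rough sketch of the idea (names like loadModule and jsBase are illustrative, not Evergreen's actual API), such a require()-ish loader can resolve a dotted module name to a same-domain URL and emit a synchronous script tag instead of issuing XHR:

```javascript
// Hypothetical require()-style loader: resolve a dotted module name to a
// path and emit a synchronous <script> tag, avoiding pre-onLoad XHR.
var jsBase = '/js/dojo'; // assumed base path for same-domain JS

function moduleToPath(module) {
    // "openils.widget.Searcher" -> "/js/dojo/openils/widget/Searcher.js"
    return jsBase + '/' + module.replace(/\./g, '/') + '.js';
}

function loadModule(module) {
    // document.write keeps the load synchronous and inside a single
    // inline block, so no additional pre-onLoad XHR is issued.
    document.write('<script type="text/javascript" src="' +
        moduleToPath(module) + '"><\/script>');
}
```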
miker [Fri, 10 Dec 2010 14:27:37 +0000 (14:27 +0000)]
Backporting r18957 from trunk:
Fix two bugs:
* Wide Character warning in authority.generate_overlay_template due to the generated template not being UTF-8 encoded internally
* Correctly test the same space-normalization form of the pre- and post-strip records during the application of a replace rule in vandelay.replace_field
This addresses https://bugs.launchpad.net/evergreen/+bug/687996 for 2.0beta5
senator [Tue, 7 Dec 2010 22:43:55 +0000 (22:43 +0000)]
Backport r18931 from trunk
Serials: When the fully compressed serial holdings are active in the OPAC,
you get this "issues held" display with an expand/compress toggle that will
either show you individual holdings (and allow you to place holds on them)
or compressed holdings statements.
The functionality existed in trunk before this commit, but this cleans it up
and makes it better. It's more consistent with the result detail table,
it doesn't offer you the chance to place holds on issues that don't have
units (copy-equivalent objects), and so on.
dbs [Wed, 1 Dec 2010 19:12:16 +0000 (19:12 +0000)]
Return the copy status name when a copy is not available
It looks like the checkout operation used to return a fleshed
config.copy_status object, but that changed and we now get a
raw ccs ID back.
Retrieve the status name using the ccs ID and present that
to the users. Also, in case problems like this happen in
the future, provide a more specific error message and var
name so that it will be (hopefully!) a little clearer what
payload was expected in the first place :)
dbs [Tue, 30 Nov 2010 21:11:35 +0000 (21:11 +0000)]
Clean up some of the Apache config mod_rewrite rules
Thomas Berezansky suggested some improvements to the mod_rewrite
rules in eg_vhost.conf on the -devel mailing list; this is a stab
at correcting the most egregious problems.
Tested with Zotero and unAPI still works; tested with the staff
client and language-switching still works, as do the Conify and
Vandelay interfaces. Seems reasonably good.
dbs [Tue, 30 Nov 2010 20:35:09 +0000 (20:35 +0000)]
Enable GET params to be added properly in buildOPACLink()
Symptom was that the "?l=#" parameter wasn't being added to the
home screen "Advanced Search" link. Cause was that the
dojo.addOnLoad(init) call was being made after the
dojo.addOnLoad(home_init) call, which depended on globals being
set by init(). This started happening when the JavaScript was
shifted around in an attempt to kill the white screen of death.
There may be other similar issues in other interfaces; keep
your eyes open!
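A minimal model of the ordering dependency described above (the queue here is an illustrative stand-in for Dojo's internal one; only the function names mirror the commit):

```javascript
// dojo.addOnLoad() callbacks run FIFO, so home_init() only sees the
// globals if init() was registered first. Toy queue for illustration:
var onLoadQueue = [];
function addOnLoad(fn) { onLoadQueue.push(fn); }
function fireOnLoad() { onLoadQueue.forEach(function (fn) { fn(); }); }

var globals = {};
function init()      { globals.locale = 'en-US'; }               // sets globals
function home_init() { globals.link = '?l=' + (globals.locale || ''); }

addOnLoad(init);       // must be registered first ...
addOnLoad(home_init);  // ... so the "?l=" parameter gets populated
fireOnLoad();
```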
dbs [Sun, 28 Nov 2010 15:11:02 +0000 (15:11 +0000)]
Address 1.6.1-2.0 upgrade problems reported by Ben Shum
1. We were attempting to update the asset.uri ID sequence value
with the wrong syntax; also, adding just 1 would return an error
in the event that only the seed value for asset.uri had been
inserted.
2. Somehow the body of the maintain_control_numbers() function
was pasted twice, resulting in a syntax error.
senator [Tue, 23 Nov 2010 21:39:09 +0000 (21:39 +0000)]
Backport r18838 from trunk
Serials: Fix error in batch receiving when trying to change the shelving
location of the previous item in the stream when there /is/ no previous
item in the stream
phasefx [Mon, 22 Nov 2010 20:38:55 +0000 (20:38 +0000)]
append to the bottom of the list for XUL-based hold list interfaces. The result of this is that rows appended off-screen (with just the hold ID) will not make a network request for fleshing until they either become visible or a column sort action is initiated
gmc [Mon, 22 Nov 2010 17:17:13 +0000 (17:17 +0000)]
bug #680096 - upgrade script to partially reingest bibs after upgrade to 2.0
This solves the problem of the new facets sidebar showing up empty in
OPAC search results. Since the process of populating metabib.facet_entry
and updating metabib.*_field_entry can take a long time on large databases,
the update SQL is generated by a separate script, reingest-1.6-2.0.pl. Usage
from an example run is:
./reingest-1.6-2.0.pl: generate SQL script to reingest bibs during an upgrade to Evergreen 2.0
By default, the script writes to the file reingest-1.6-2.0.sql. To modify
this script's behavior, you can supply the following options:
--config /path/to/opensrf_core.xml used to get connection information to
the Evergreen database
--chunk_size n number of bibs to reingest in a chunk;
specify if you don't want all of the
bibs in the database to be reindexed
in a single transaction
--output /path/to/output_file.sql path of output SQL file
Writing output to file reingest-1.6-2.0.sql
SQL script complete. To perform the reingest, please run the script using
the psql program, e.g.,
If you are running a large Evergreen installation, it is recommended that you
examine the script first; note that a reingest of a large Evergreen database
can take several hours.
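The --chunk_size idea can be sketched as follows. The emitted SQL text is a simplified stand-in for what reingest-1.6-2.0.pl actually writes, and the no-op UPDATE used to fire the reingest triggers is an assumption here:

```javascript
// Sketch: split bib IDs into batches so each reingest UPDATE runs in its
// own transaction instead of one giant one (hypothetical SQL shape).
function chunkSql(bibIds, chunkSize) {
    var stmts = [];
    for (var i = 0; i < bibIds.length; i += chunkSize) {
        var batch = bibIds.slice(i, i + chunkSize);
        stmts.push(
            'BEGIN;\n' +
            // a no-op update that fires the ingest triggers (assumed)
            'UPDATE biblio.record_entry SET id = id WHERE id IN (' +
            batch.join(', ') + ');\n' +
            'COMMIT;'
        );
    }
    return stmts;
}
```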
gmc [Mon, 22 Nov 2010 13:38:53 +0000 (13:38 +0000)]
parallel fine generator
The fine generator cronjob can now use multiple
parallel processes by setting fine_generator/parallel
in opensrf.xml to a value greater than 1. This
can speed up periodic fine generation in a database
containing a large number of overdue loans.
Also added a service to return just the list of
IDs of overdue loans and reservations - fleshing
the entire set of overdue loans when generating fines
has been observed to cause significant swap-thrashing in
at least one large database.
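One hedged sketch of how overdue circulation IDs might be divvied up among fine_generator/parallel worker processes; the real partitioning scheme may differ, this just illustrates the idea:

```javascript
// Assign each overdue circulation ID to one of N worker buckets by
// modulus, so parallel workers never touch the same loan (illustrative).
function partition(ids, parallel) {
    var buckets = [];
    for (var i = 0; i < parallel; i++) buckets.push([]);
    ids.forEach(function (id) {
        buckets[id % parallel].push(id);
    });
    return buckets;
}
```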
gmc [Mon, 22 Nov 2010 13:38:51 +0000 (13:38 +0000)]
hold targeter: add option to run parallel targeter processes
Permit the hold targeter to divvy up the work and run more than one process
at a time by setting the opensrf.xml setting hold_targeter/parallel to a
value greater than one. Doing so can significantly reduce the
time it takes to (re)target a large number of hold requests, although
only up to a point; in other words, when increasing the number of
parallel targeters beyond one, it is recommended to do so gradually.
dbs [Sat, 20 Nov 2010 19:56:10 +0000 (19:56 +0000)]
Address maintain_control_numbers() database function bug #677160
Jason Stephenson reported a bug handling records with multiple
001 or 003 fields, and supplied a set of test records to
reproduce the condition. The bug caused the ingest process
to throw a database error, rolling back the transaction and
preventing the actual ingest of those records.
The solution was to simplify the logic in maintain_control_numbers().
Now, in the case that there are either multiple 001s or 003s in the
incoming record, we simply delete all of the 003s and 001s and
create the desired 001 and 003. Also, if the incoming record does not
contain exactly one 001 and one 003, we do not try to preserve one of
those values in the 035, as it would be close to meaningless.
Many thanks to Jason for the clear bug report and test cases!
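The simplified decision logic might be modeled like this, with a record as an array of {tag, value} fields and orgCode/recId standing in for the real control-number sources; this is a sketch of the behavior described above, not the actual PL/pgSQL:

```javascript
// With multiple 001s or 003s, drop them all and write fresh ones; only
// preserve the old pair in an 035 when it is unambiguous (exactly one
// of each), per the rule described in the commit message.
function normalizeControlNumbers(fields, orgCode, recId) {
    var f001 = fields.filter(function (f) { return f.tag === '001'; });
    var f003 = fields.filter(function (f) { return f.tag === '003'; });
    var keep = fields.filter(function (f) {
        return f.tag !== '001' && f.tag !== '003';
    });
    if (f001.length === 1 && f003.length === 1) {
        // unambiguous: preserve old pair as "(003 value)001 value"
        keep.push({ tag: '035', value: '(' + f003[0].value + ')' + f001[0].value });
    }
    keep.push({ tag: '001', value: recId });
    keep.push({ tag: '003', value: orgCode });
    return keep;
}
```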
gmc [Thu, 18 Nov 2010 16:31:36 +0000 (16:31 +0000)]
use pcrud auto-complete widget when selecting providers
Fixes general slowness working with invoice and PO forms if
more than a couple hundred providers are defined.
This could be generalized with a bit of work on Fieldmapper
to define "has-one-chosen-by-user-from-cast-of-thousands"
relationships that should trigger use of the auto-complete widget.
gmc [Thu, 18 Nov 2010 16:31:34 +0000 (16:31 +0000)]
fetchItemByIdentity now returns null immediately upon null identity
Besides being trivially more efficient, this avoids a situation
where pcrud can sometimes time out when attempting to retrieve
a row by a null PK value.
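The guard amounts to an early return before any pcrud call is made; the signature below paraphrases fetchItemByIdentity rather than reproducing the real one:

```javascript
// Short-circuit on a null identity so pcrud is never asked to retrieve
// a row by a null primary key (which can time out). Sketch only.
function fetchItemByIdentity(store, identity, onComplete) {
    if (identity === null || identity === undefined) {
        onComplete(null);   // never hit pcrud with a null PK
        return null;
    }
    // ... the normal pcrud retrieve-by-PK would happen here ...
}
```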
erickson [Thu, 18 Nov 2010 15:37:11 +0000 (15:37 +0000)]
fetch more than the default 10 (per page) distrib formulas; 500 is arbitrary, but still notably less than the infinity that was in effect before paging. TODO: research options for using the new autofieldwidget/dojo/pcrud-store instead
senator [Wed, 17 Nov 2010 22:41:33 +0000 (22:41 +0000)]
Backport r18762 from trunk:
Wonder of wonders, a Dojo data store supporting lazy loading objects via pcrud!
So openils.PermaCrud.Store was dreamt up and directed by Mike Rylander, and
implemented by me. Right now it gives us a new way to provide widgets for
selecting objects in Dojo-based interfaces.
Previously we had some dropdowns here and there that really shouldn't
be dropdowns (such as one for selection lists in Acq, and several for
resources and resource types in Booking -- these examples I've replaced,
but there are surely more), because loading a dropdown with potentially
zillions of items to choose from can take forever and break the interface.
Now we can have autocompleting textboxes that only load items matching
what you type (and even then with a low-ish default limit, so that if
you're vague in your input you still don't get huge, unwieldy result sets).
Easiest way to see an example is if you already have any acq selection lists.
Just go to any catalog record, choose Actions for this Record, choose View/Place
orders, then click "Add to Selection List." In the resulting dialog, that
second field used to be a dropdown, but now it's an autocompleting textbox.
Alternatively, you can see these in the affected booking interfaces (see files
modified in this commit) under Admin -> Server Administration -> Booking.
The future promises even better things for this store. When it implements the
Dojo Write API, interfaces using grids can potentially be vastly simplified
by relying on the store to save its own dirty objects. The Notification API
would facilitate easy use of single stores with multiple widgets. All good
things for faster-to-write interfaces.
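The shape of a lazy fetch request might look like the following; the method and option names are assumptions for illustration, not the real openils.PermaCrud.Store API:

```javascript
// Build a Dojo-data-style fetch request from what the user typed: a
// prefix query plus a small default limit, so vague input cannot pull
// back a zillion rows at once (hypothetical names throughout).
function buildFetchRequest(typed, opts) {
    opts = opts || {};
    return {
        query: { name: typed + '*' },   // prefix match on the typed text
        limit: opts.limit || 15,        // low-ish default cap
        offset: opts.offset || 0
    };
}
```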
erickson [Mon, 15 Nov 2010 20:26:44 +0000 (20:26 +0000)]
return AutoIDL to its original state of loading the whole IDL if no classes are selected. This will ease the process of moving to /IDL2js; other minor cleanup
dbs [Mon, 15 Nov 2010 05:24:38 +0000 (05:24 +0000)]
Prevent creation of authority records that are truncated by one letter
The summarizeField() function grabbed the values of the XUL elements,
which were set by the keypress event listeners on the XUL elements.
However, the keypress event listener seems to capture the value of
the XUL element before the value of the new key is appended to the
existing value in a textbox - so, when you typed a new subfield, then
right-clicked to create an authority, the value that was captured was
missing the final character.
Adding the "input" event to the registered listeners captures the
actual value for creating an authority and solves the problem. It
might be possible to remove the keypress event listeners, but for
now we'll take the cautious route.
dbs [Mon, 15 Nov 2010 04:52:59 +0000 (04:52 +0000)]
Fix negative paging issue in open-ils.supercat.authority.author.startwith
When paging backwards through authority lists, we were skipping the
first page of results. By reducing the offset by the value of one
page, we restore the expected order.
The same problem might affect other paging interfaces: to be determined.
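The fix can be sketched as a small offset calculation (function and parameter names are illustrative, not the supercat method's internals):

```javascript
// When paging backwards, reduce the offset by one full page, clamped at
// zero, so the first page of results is not skipped (sketch).
function previousPageOffset(currentOffset, pageSize) {
    var offset = currentOffset - pageSize;
    return offset < 0 ? 0 : offset;
}
```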
dbs [Mon, 15 Nov 2010 04:52:42 +0000 (04:52 +0000)]
Do not cache the authority context menu
Caching would be great, except when you add an authority in the
flow and you expect to see it the next time you right-click
on the authority that you just added.
erickson [Fri, 12 Nov 2010 19:51:26 +0000 (19:51 +0000)]
Back-port of 18712 and related changes: Server-generated IDL JS
The goal is to reduce use of pre-onload XHR, which is known to cause
problems (dreaded white-screen-of-death) in firefox/xulrunner. Change
allows opac, staff client, and embedded browser interfaces to load
a pre-formatted JS object instead of IDL XML via XHR. In addition to
dropping the XHR, clients no longer parse the XML, which should reduce
page render time. Finally, in the staff interfaces, the full IDL is
once again loaded, so there is no need to specify per-page classes.
Per-page classes are still supported and used in the OPAC to reduce the
up-front load time.
Change requires an update to the Evergreen Apache config.
Part of this change included condensing fieldmapper.hash and
fieldmapper.dojoData content into fieldmapper.Fieldmapper to avoid
circular dependencies, which were causing problems with IE. We will
eventually want to deprecate .hash and .dojoData, but for now they still
function as before.
dbs [Thu, 11 Nov 2010 17:02:03 +0000 (17:02 +0000)]
Avoid munging 035 when a new record is created
If there is no 003 in the record when it is created, then we will not attempt
to generate a 035. If the incoming record contains a 001 and 003, then we will
create a 035.
dbs [Thu, 11 Nov 2010 17:01:32 +0000 (17:01 +0000)]
Do not supply a default value for 003 in new authority records
With cat.maintain_control_numbers enabled by default, we can trust
the database trigger to create the appropriate 003 for us - and by
not supplying a 003 in the new record, we won't create a spurious
035 for a brand new record.
dbs [Thu, 11 Nov 2010 04:18:14 +0000 (04:18 +0000)]
Update the edit dates for authority and MFHD records when they are edited
Addresses the oversight in the original implementation that missed this;
important if we're going to differentiate between creating and editing
a record for triggers.
dbs [Wed, 10 Nov 2010 22:31:12 +0000 (22:31 +0000)]
Enable "maintain control numbers" and "record ID as TCN" behavior by default
Per http://ur1.ca/2bgc4, this behavior hews more closely to the MARC21
specification. Note, however, that duplicate bib detection in the
"Import via Z39.50" interface will be somewhat affected; a more
trustworthy workaround is to include the "Local catalog" in Z39.50 searches to
determine if a matching record already exists in the database.
gmc [Wed, 10 Nov 2010 13:19:03 +0000 (13:19 +0000)]
do not use TRUNCATE when refreshing reporter.materialized_simple_record
Previous behavior would break Slony replication after doing a
bib load. Since a deletion is slower than a truncate, if you're
not using Slony replication, you may prefer to truncate rmsr
prior to calling reporter.enable_materialized_simple_record_trigger.
miker [Mon, 8 Nov 2010 16:24:47 +0000 (16:24 +0000)]
Backporting r18652: Teach vandelay.replace_field to be a little smarter by allowing simple cases of both replacing and regexp-testing the same subfield
erickson [Mon, 8 Nov 2010 16:21:45 +0000 (16:21 +0000)]
holds retrieval API call sorting and cleanup: sort non-cancelled holds by ready-for-pickup, then active, then suspended; use json_query to fetch IDs first, so id_list calls can avoid fetching the full hold objects in the ML; sort fleshed transits by send time to pick up the latest transit