rigrelease:
  Sets up branding/xulrunner-stub for release builds
  Includes a change to windowssetup.nsi to swap out the image set
rigbeta:
  Sets up branding/xulrunner-stub for beta builds
  Includes a change to windowssetup.nsi to swap out the image set
rebuild:
  Shortcut for re-using the same version/stamp as the last build
Also updated the stamp target to add files for rebuild.
Author: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Jason Etheridge <jason@esilibrary.com>
git-svn-id: svn://svn.open-ils.org/ILS/trunk@20253 dcc99617-32d9-48b4-a31d-7c20da2025e4
flesh parts on items for circ functions. Hrmm, but this only works because unflesh_copy doesn't know about parts. Do we need unflesh_copy on checkout/checkin/renew result payloads?
Thanks to http://forums.mozillazine.org/viewtopic.php?f=19&t=2048501.
Author: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Jason Etheridge <jason@esilibrary.com>
git-svn-id: svn://svn.open-ils.org/ILS/trunk@20228 dcc99617-32d9-48b4-a31d-7c20da2025e4
Attempt to be more robust/forceful on clearing hint text
Reset hotkeys less often (such as *not* on operator change)
Re-enable keyset properly
Author: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Jason Etheridge <jason@esilibrary.com>
git-svn-id: svn://svn.open-ils.org/ILS/trunk@20227 dcc99617-32d9-48b4-a31d-7c20da2025e4
New menu items and updates to menus in admin menu area
Toolbar/hotkey settings can be saved to workstation prefs from admin -> workstation administration
Update the button_bar org unit setting to be a string, 'circ' or 'cat' by default, to pick between those two toolbars
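As a sketch, assuming the setting lives in actor.org_unit_setting under a name like 'ui.general.button_bar' (the name and org unit here are illustrative), picking the cataloging toolbar might look like:

    -- hypothetical: select the 'cat' toolbar for org unit 4
    UPDATE actor.org_unit_setting
       SET value = '"cat"'  -- settings are stored as JSON strings
     WHERE org_unit = 4
       AND name = 'ui.general.button_bar';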
Author: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Jason Etheridge <jason@esilibrary.com>
git-svn-id: svn://svn.open-ils.org/ILS/trunk@20207 dcc99617-32d9-48b4-a31d-7c20da2025e4
Define an Install Tag, defaulting to the product tag.
This is used in the install location and in the registry setting that records where we installed.
Different install tags mean different installs, so Trunk won't (by default) install over 2.1, for example.
Plus, include riggings for Mike Peter's new install images.
Author: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Jason Etheridge <jason@esilibrary.com>
git-svn-id: svn://svn.open-ils.org/ILS/trunk@20199 dcc99617-32d9-48b4-a31d-7c20da2025e4
We went to the effort of extracting the translatable text from
950.data.seed-values.sql, but had not marked the fields as
translatable in the IDL. Now at least the out-of-the-box
fields and classes can easily be given translations.
Correct encoding issue with authority_control_fields.pl
Is there ever a time when MARC::File::XML would be invoked with
anything other than BinaryEncoding => 'utf-8'? Not here, at
least. Addresses LP# 764582.
* Move to in-core fts function, instead of the compat wrapper provided by the tsearch2 contrib
* Provide default cover density tuning (config file)
* Move default preferred language settings from storage to search, where they make more sense
More on the CD tuning:
Evergreen uses a cover density algorithm for calculating relative ranking of matches. There
are several tuning parameters and options available. By default, no document length normalization
is applied. From the Postgres documentation on ts_rank_cd() (the function used by Evergreen):
Since a longer document has a greater chance of containing a query term it is reasonable
to take into account document size, e.g., a hundred-word document with five instances of
a search word is probably more relevant than a thousand-word document with five instances.
Both ranking functions take an integer normalization option that specifies whether and how
a document's length should impact its rank. The integer option controls several behaviors,
so it is a bit mask: you can specify one or more behaviors using | (for example, 2|4).
0 (the default) ignores the document length
1 divides the rank by 1 + the logarithm of the document length
2 divides the rank by the document length
4 divides the rank by the mean harmonic distance between extents (this is implemented only by ts_rank_cd)
8 divides the rank by the number of unique words in document
16 divides the rank by 1 + the logarithm of the number of unique words in document
32 divides the rank by itself + 1
If more than one flag bit is specified, the transformations are applied in the order listed.
It is important to note that the ranking functions do not use any global information, so it
is impossible to produce a fair normalization to 1% or 100% as sometimes desired. Normalization
option 32 (rank/(rank+1)) can be applied to scale all ranks into the range zero to one, but of
course this is just a cosmetic change; it will not affect the ordering of the search results.
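To make the bitmask concrete, here is a minimal SQL sketch; the documents table, index_vector column, and query are hypothetical, while ts_rank_cd() and the 2|4 mask come straight from the docs above:

    -- rank by cover density, normalized by document length (2)
    -- and by mean harmonic distance between extents (4)
    SELECT id, ts_rank_cd(index_vector, query, 2|4) AS rank
      FROM documents, to_tsquery('english', 'cat & dog') AS query
     WHERE index_vector @@ query
     ORDER BY rank DESC;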
In Evergreen, these options are set via search modifiers. The modifiers are mapped in the
following way:
* #CD_logDocumentLength => 1 :: rank / (1 + LOG(total_word_count)) :: Longer documents slightly less relevant
* #CD_documentLength => 2 :: rank / total_word_count :: Longer documents much less relevant
* #CD_meanHarmonic => 4 :: Word Proximity :: Greater matched-word distance is less relevant
* #CD_uniqueWords => 8 :: rank / unique_word_count :: Documents with repeated words much less relevant
* #CD_logUniqueWords => 16 :: rank / (1 + LOG(unique_word_count)) :: Documents with repeated words slightly less relevant
* #CD_selfPlusOne => 32 :: rank / (1 + rank) :: Cosmetic normalization of rank value between 0 and 1
Adding one or more of these to the default_CD_modifiers list will cause all searches that use QueryParser to apply them.
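For example, assuming QueryParser's usual #modifier syntax, a single search entered as

    #CD_documentLength #CD_meanHarmonic piano concerto

would apply the 2|4 normalization sketched above to just that one search, without touching the default_CD_modifiers list.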
The helper script grab-db-comment.pl is what actually parses out
the comment statements.
To avoid repetition, the list of default SQL scripts to use when
initializing an Evergreen database has been moved to a new file
called sql_file_manifest.
* remove copyright, license verbiage, and C-style comment marking
from the comments; these can live in the SQL scripts
* update several copyright headers
* make minor improvements to the documentation of a couple of tables
We were 98% of the way there; now we no longer need to
cd into the same directory as the i18n testing scripts
to run them with meaningful output. Should be useful
for adding these to the CI server.
We must have asked this script to check JS files for valid entities
for a reason at some point in the dark past, but it couldn't have
been a very good reason; we're getting a false positive that needs
to be hushed now. Better to just stop looking for XML entities in
JavaScript.
Empty strings in oils_i18n_gettext() throw i18n errors
When you run 'make newpot', if you have an empty string in an
oils_i18n_gettext() function, you'll see errors like:
Error in line 1712 of SQL source file: 'NoneType' object has no attribute 'group'
This satisfies the i18n build process and also serves as a
more evident placeholder for expanded descriptions if someone
feels so inclined in the future.
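A sketch of the before and after, with a hypothetical setting name and an illustrative argument order for oils_i18n_gettext():

    -- Before: the empty string trips up 'make newpot'
    --   oils_i18n_gettext('foo.bar', '', 'coust', 'description')
    -- After: a real placeholder keeps the i18n tooling happy
    --   oils_i18n_gettext('foo.bar', 'FIXME: describe this setting',
    --                     'coust', 'description')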
%SUBSTR(#)%...%SUBSTR_END%
Take the substring starting at position # and running to the end of the string.
If # is negative, count backwards from the end of the string.
%SUBSTR(#,#)%...%SUBSTR_END%
Same as the previous form, but limit the result to the number of characters given by the second number, counting forward from the start point.
If the second number is negative, count backwards instead of forwards.
TRIM macros inside of SUBSTR will be replaced first, then SUBSTR, then TRIM outside of SUBSTR.
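A worked example, assuming the macro follows JavaScript substr() semantics (0-based start, negative start counting from the end) and that %item_barcode% stands in for a real template variable:

    %SUBSTR(0,5)%%item_barcode%%SUBSTR_END%   prints the first five characters
    %SUBSTR(-4)%%item_barcode%%SUBSTR_END%    prints the last four characters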
Author: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Thomas Berezansky <tsbere@mvlc.org>
Signed-off-by: Jason Etheridge <jason@esilibrary.com>
git-svn-id: svn://svn.open-ils.org/ILS/trunk@20137 dcc99617-32d9-48b4-a31d-7c20da2025e4
Allow NULL "use restriction" fields for located URIs
The asset.uri.use_restriction field, which is really a sort of public notes
field for 856 fields, was grabbing the $u subfield (URL) as a sort of last-gasp
effort to give it some data. However, the effect was rather odd, and it led to
workarounds such as Conifer's skin suppressing the use restriction field
whenever its value was identical to the URL.
Instead, stop grabbing $u and handle the case where the use_restriction
column is NULL gracefully, just like the schema intended.
Delete ##URI## call numbers and uri_call_number_map entries on bib reingest
This approach will lead to some acn/auricnm ID inflation, but it works.
Addresses LP# 761130 (immortal ##URI## entries in asset.call_number) reported
by Ben Shum and LP# 761085 (cannot delete bib with ##URI## volumes) reported
by Jason Etheridge.
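Roughly the shape of the cleanup, as a sketch for a single bib; the real change lives in the reingest logic and the record ID is illustrative:

    DELETE FROM asset.uri_call_number_map
     WHERE call_number IN (SELECT id FROM asset.call_number
                            WHERE record = 12345 AND label = '##URI##');
    DELETE FROM asset.call_number
     WHERE record = 12345 AND label = '##URI##';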
Protect dumb JavaScript engines from having to deal with actual Unicode
The holdings_xml format did not include an XML declaration, but adding that
as we do here still does not make the Firefox and Chromium JS engines capable
of consuming XML that contains Unicode content outside of the base ASCII
range.
So, we invoke entityize() to convert anything outside of the realm of
ASCII to XML entities. An alternative would be to invoke entityize() in
OpenILS::Application::SuperCat::unAPI::acn but it's not clear if that
would interfere with any other uses.
With this change, library names / copy location names with Unicode content
can be displayed correctly on the search results page.
At some point (r16750) we started doing a numeric comparison of
$flesh instead of just checking to see if $flesh was defined; this
returned false when $flesh == 'uris', preventing URIs from being
included in the marcxml-uris unAPI format.
This restores URIs to marcxml-uris, so we can revert the extra
BibTemplate call in rdetail_summary.xml.
Specify the holdings_xml unAPI format for URI calls
The unAPI marcxml-uris format is not returning URIs at the moment.
While we're getting that fixed, use the holdings_xml format to
get the URI job done; requires an extra JS call, but that's
better than not working at all.
Escape rather than filter SIMILAR TO metacharacters in patron crazy search
The filtering I introduced in r19983 was overly aggressive, and included
characters that weren't actually SIMILAR TO metacharacters. Instead, escape
each character, carefully going through the list of metacharacters listed at
http://www.postgresql.org/docs/8.4/interactive/functions-matching.html
Works for email addresses like "foo.bar+baz@example.com".
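The idea in SQL terms, with an illustrative pattern; the actual escaping happens in the application code:

    -- '+' is a SIMILAR TO metacharacter; escaped, it matches literally
    SELECT 'foo.bar+baz@example.com' SIMILAR TO '%bar\+baz%';  -- true
    -- (with standard_conforming_strings off, spell it E'%bar\\+baz%')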
* use the version from the wiki, which provides the same results as the
previous version but performs better on large databases
* now works without editing (a vacuum cannot run inside of a transaction)
* don't do vacuum full, just a regular vacuum analyze
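The operative statement, run outside any BEGIN/COMMIT block (the table name is just an example):

    -- plain VACUUM ANALYZE: no exclusive lock, unlike VACUUM FULL
    VACUUM ANALYZE biblio.record_entry;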
Add support for Multi-Homed Items (aka Foreign Bibs, aka Linked Items)
Evergreen needs to support the ability to attach a barcoded item to more than one bibliographic record. Use cases include:
1. Barcoded E-Readers with preloaded content
* Readers would all be items attached to a single "master" bib record in the traditional way, through call numbers that define their ownership
* Each reader, as a barcoded item, can be attached through Multi-homed Items to records describing the list of preloaded content
* These attached Multi-homed Items can be added and removed as content is swapped out on each reader
2. Dual-language items
* Cataloger decides which of several alternate languages is the primary, and attaches the barcoded item to that record in the traditional way
* Alternate language records are attached to this item through Multi-homed Items
3. "Back-to-back" books -- two books printed upside down relative to one another, with two "front" covers
* Cataloger decides which of the two titles is the primary, and attaches the barcoded item to that record in the traditional way
* Alternate title record is attached to this item through Multi-homed Items
4. Bound Volumes -- Sets of individual works collected into a single barcoded package
* Cataloger decides which of the titles is the primary (or creates a record for the collection as a whole), and attaches the barcoded item to that record in the traditional way
* Remaining title records for the collected pieces are attached to this item through Multi-homed Items
Functionality funded by Natural Resources Canada -- http://www.nrcan-rncan.gc.ca/com/
Please see http://git.esilibrary.com/?p=evergreen-equinox.git;a=shortlog;h=refs/heads/multi_home for the full commit history.
Patch from Ben Ostrowsky (with input) adding optional support to the Apache redirect module for also reading the redirect skin and domain from the library IP configuration file.
Enable marc2sre.pl to run reasonably fast with a large set of bibs
Our previous iteration of marc2sre.pl used an ILIKE stanza
beginning with a wildcard to match system control numbers
without having to specify the institution's MARC code.
This worked, but was painfully slow in large bib sets as
the database needed to use a bitmap index scan to find matches.
By adding a --prefix flag, the user can specify the institutional
MARC code for the set of records and we can use an exact match
against metabib.full_rec.value, which is immeasurably faster.
This is, of course, a problem if there are multiple institutional
MARC codes in use for a given set of bibliographic records.
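In rough SQL terms (column names as in metabib.full_rec; the tag and values are illustrative):

    -- old: the leading wildcard means the slow scan described above
    SELECT record FROM metabib.full_rec
     WHERE tag = '035' AND subfield = 'a' AND value ILIKE '%12345';
    -- new, with --prefix: an exact match can use the index directly
    SELECT record FROM metabib.full_rec
     WHERE tag = '035' AND subfield = 'a' AND value = 'ocolc12345';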
Improve error handling in marc2sre.pl when bib ID is not found
If we can't find a bibliographic record ID to use in our load, then
skip that MFHD record and move on to the next one. Using the counter
gives sites a chance to identify which record caused the problem.
Aside: bitmap index scans for leading-'%' LIKE searches make the
--bibfield / --bibsubfield options extremely slow in large datasets. If
at all possible, avoid this path!