LP#2007880: fix open-ils.actor.ou_setting.ancestor_default

This patch fixes a regression introduced by bug 2006749 that prevented
open-ils.actor.ou_setting.ancestor_default from retrieving the value of a
library setting that does not have a view permission associated with it. It
also fixes a similar issue with open-ils.actor.org_unit.settings.history.retrieve.

To test
-------
[1] Use srfsh to retrieve the value of a library setting that does not have a
    view permission. E.g.,

      request open-ils.actor open-ils.actor.ou_setting.ancestor_default 4, "circ.grace.extend"

    Note that an error is returned.
[2] Apply the patch and repeat step 1. This time, the value of the setting
    should be returned.
[3] Verify that viewing the edit history of a setting in the Library Settings
    admin page works as expected.

Signed-off-by: Galen Charlton <gmc@equinoxOLI.org>
Signed-off-by: Jason Stephenson <jason@sigio.com>
LP2006749: Fix second call to ou_ancestor_setting_perm_check in AppUtils.pm

Signed-off-by: Chris Sharp <csharp@georgialibraries.org>
Signed-off-by: Jason Stephenson <jason@sigio.com>
Signed-off-by: Galen Charlton <gmc@equinoxOLI.org>
LP#2006749: Fix call to ou_ancestor_setting_perm_check in AppUtils.pm

The $self and $e arguments are missing when the ou_ancestor_setting
subroutine calls ou_ancestor_setting_perm_check in AppUtils. The $coust
argument also needs to be $coust->view_perm->code for the allowed check in
ou_ancestor_setting_perm_check. This commit corrects the call to
ou_ancestor_setting_perm_check.

Signed-off-by: Jason Stephenson <jason@sigio.com>
Signed-off-by: Chris Sharp <csharp@georgialibraries.org>
Signed-off-by: Galen Charlton <gmc@equinoxOLI.org>
lp1839341 Port Org Setting Editor UI

- Speedy retrieval to display all Org Unit Settings (~6 seconds instead of
  DOJO's 20)
- Implement org_unit.settings.history.retrieve API call utilizing CSTORE
  operations
- View and revert OU settings to specific changes
- Update Org Unit Setting context orgs and values
- Filter Org Unit Settings by a string found in the name, description,
  label, and/or group fields of Org Unit settings
- Get history in properly descending order based on the date_applied field
- Strip surrounding quotes from new values in the history log
- Add columns for Edit and History actions
- Add SQL changes to support a workstation setting for the org unit settings
  grid
- Port Import/Export dialog for batch-modifying settings using a JSON string

Signed-off-by: Kyle Huckins <khuckins@catalyte.io>

Changes to be committed:
    modified:   Open-ILS/src/eg2/src/app/staff/admin/local/admin-local-splash.component.html
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/edit-org-unit-setting-dialog.component.html
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/edit-org-unit-setting-dialog.component.ts
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/org-unit-setting-history-dialog.component.html
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/org-unit-setting-history-dialog.component.ts
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/org-unit-setting-json-dialog.component.html
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/org-unit-setting-json-dialog.component.ts
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/org-unit-settings-routing.module.ts
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/org-unit-settings.component.html
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/org-unit-settings.component.ts
    new file:   Open-ILS/src/eg2/src/app/staff/admin/local/org-unit-settings/org-unit-settings.module.ts
    modified:   Open-ILS/src/eg2/src/app/staff/admin/local/routing.module.ts
    modified:   Open-ILS/src/perlmods/lib/OpenILS/Application/Actor.pm
    modified:   Open-ILS/src/perlmods/lib/OpenILS/Application/AppUtils.pm
    modified:   Open-ILS/src/sql/Pg/950.data.seed-values.sql
    new file:   Open-ILS/src/sql/Pg/upgrade/XXXX.data.ouSettings-grid-ws-settings.sql

Signed-off-by: Jane Sandberg <sandbej@linnbenton.edu>
Signed-off-by: Bill Erickson <berickxx@gmail.com>
Signed-off-by: Terran McCanna <tmccanna@georgialibraries.org>
Signed-off-by: Jane Sandberg <sandbergja@gmail.com>
LP#1842297: Implements patron sign-on to the OpenAthens service.

Libraries that are OpenAthens customers can configure Evergreen to sign
their patrons on to OpenAthens either immediately when they sign on to
Evergreen, or on demand when they select their library as their method of
signing on to OpenAthens-protected resources.

Signed-off-by: oajulianclementson <51331324+oajulianclementson@users.noreply.github.com>
Signed-off-by: Jane Sandberg <js7389@princeton.edu>
LP1930747: Add MARC_NAMESPACE to Const.pm

Now that we have 3 separate $MARC_NAMESPACE definitions it's time to just
move it into Const.pm and call it done.

Signed-off-by: Jason Boyer <JBoyer@equinoxOLI.org>
Signed-off-by: Jason Stephenson <jason@sigio.com>
LP1908614: Show the Age Hold Protection name in the staff catalog

Signed-off-by: Jason Boyer <JBoyer@equinoxOLI.org>
Signed-off-by: Michele Morgan <mmorgan@noblenet.org>
Signed-off-by: Bill Erickson <berickxx@gmail.com>
LP1928359 Add item circ info to Item Table

Adds "Total Circ Count" and "Last Circ Date" to the staff catalog Item Table
grid view.

Signed-off-by: Bill Erickson <berickxx@gmail.com>
Signed-off-by: Shula Link <slink@gchrl.org>
Signed-off-by: Galen Charlton <gmc@equinoxOLI.org>
lp1905028 lost items and price versus acq cost

This feature adds two new library settings:

    Use Item Price or Cost as Primary Item Value
    Use Item Price or Cost as Backup Item Value

which intersect with the behavior of these existing settings:

    Charge lost on zero
    Default Item Price
    Minimum Item Price
    Maximum Item Price

Each of these settings affects how item price is used in various contexts
and is not limited to "lost" items; it can affect notices, fine rules, and
billings for long overdue and damaged items (as well as lost items).

By default, the price field on items is the only field considered by these
various uses, but if we set, for example, "Use Item Price or Cost as Primary
Item Value" to "cost", then we'll use the cost field instead of the price
field. Alternately, if we set the "Backup Item Value" to "cost" and either
leave the "Primary Item Value" setting unset or set it to "price", then
we'll consider the price field first, and if it is either unset/null or
equal to 0 (and "Charge lost on zero" is true), it will fall through to the
cost field. We can also flip the behavior with these settings and consider
cost first and price second.

Signed-off-by: Jason Etheridge <jason@EquinoxOLI.org>
Signed-off-by: Garry Collum <gcollum@gmail.com>
Signed-off-by: Mike Rylander <mrylander@gmail.com>
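The fall-through logic described above can be sketched as follows (a minimal
illustration with hypothetical names, not the actual Perl implementation):

```python
def effective_item_value(price, cost, primary="price", backup=None,
                         charge_lost_on_zero=True):
    """Choose the item value used for billing, per the primary/backup
    settings described above. A field is skipped when it is unset/null,
    or when it is 0 and "Charge lost on zero" is in effect."""
    fields = {"price": price, "cost": cost}
    order = [primary]
    if backup and backup != primary:
        order.append(backup)
    for name in order:
        value = fields[name]
        if value is None:
            continue  # unset: fall through to the backup field
        if value == 0 and charge_lost_on_zero:
            continue  # zero while charging on zero: fall through as well
        return value
    return None  # caller falls back to Default/Min/Max Item Price settings
```

For example, with price 0, cost 12.50, and "cost" as the backup value, the
sketch yields 12.50; flipping primary and backup considers cost first.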
LP1881607 Angular catalog e-resource links display

Display electronic resource links (MARC 856's) in the Angular staff catalog.
The extraction logic, which matches the TPAC, has been put into its own API.

To test in concerto, navigate to: /eg2/staff/catalog/record/208

Signed-off-by: Bill Erickson <berickxx@gmail.com>
Signed-off-by: Elaine Hardy <ehardy@georgialibraries.org>
Signed-off-by: Jane Sandberg <sandbej@linnbenton.edu>
LP1895660: AppUtils.pm substr outside of string

unique_unnested_numbers expects a list of Pg arrays, but if it is given an
empty result list it tries to remove { and } from an undefined value.

Signed-off-by: Jason Boyer <JBoyer@equinoxinitiative.org>
Signed-off-by: Jane Sandberg <sandbej@linnbenton.edu>
Signed-off-by: Galen Charlton <gmc@equinoxOLI.org>
LP1853006 TPAC: add limit to available option to item table

This patch adds a new control to the item table in the TPAC public catalog
to specify that only available items should be displayed.

Signed-off-by: Zavier Banks <zbanks@catalyte.io>
Signed-off-by: Michele Morgan <mmorgan@noblenet.org>
Signed-off-by: Galen Charlton <gmc@equinoxinitiative.org>
LP#1815815: Library Groups

This branch implements Library Groups (what used to be called "lassos") for
Evergreen. Evergreen has, internally, a concept called "lassos" that allows
an administrator to define a group of org units to search that has no
relation to the hierarchical org tree. For instance, one might create a
group of law or science libraries within a university consortium, or group
all school libraries together.

In addition to the previous always-visible type of Library Group (lasso),
one can now make them context-aware so that they only show up if the current
search location is included as one of the org units in the Library Group.
This is implemented without regard to the org unit hierarchy, and so
requires that the relevant ancestor and descendant org units be included in
the group along with those that actually hold copies, but allows for
complete flexibility in context-aware Library Group configuration.

Signed-off-by: Mike Rylander <mrylander@gmail.com>
Signed-off-by: Ruth Frasur <rfrasur@library.in.gov>
Signed-off-by: Terran McCanna <tmccanna@georgialibraries.org>
Signed-off-by: Galen Charlton <gmc@equinoxinitiative.org>
LP1795469: Opac holdings sort now considers CN suffix

To test:
1) Create a bunch of Call Number suffixes in Administration -> Server
   Administration -> Call Number Suffixes.
2) Go to a bib record, and add a bunch of holdings, all with the same call
   number label, owning/circ libraries, item numbers, and parts (if using
   parts) but with different barcodes and CN suffixes.
3) Look at the OPAC view of these holdings. Note that they are in order by
   barcode, without any consideration for the CN suffix.
4) Apply this commit.
5) Look at the OPAC view again. Note that they are now sorted by CN suffix,
   and then by barcode.

Signed-off-by: Jane Sandberg <sandbej@linnbenton.edu>
Signed-off-by: Josh Stompro <stompro@stompro.org>
Signed-off-by: Galen Charlton <gmc@equinoxinitiative.org>
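The resulting ordering can be sketched as a two-part sort key (hypothetical
dict fields for illustration; the real change lives in the OPAC templates):

```python
def holdings_sort_key(copy):
    # Sort by call number suffix first (empty string when there is no
    # suffix), then by barcode. Field names here are illustrative.
    return (copy.get("cn_suffix") or "", copy["barcode"])

holdings = [
    {"barcode": "33001", "cn_suffix": "v.2"},
    {"barcode": "33000", "cn_suffix": "v.1"},
    {"barcode": "32999", "cn_suffix": None},
]

# Copies without a suffix sort first, then v.1, then v.2.
ordered = sorted(holdings, key=holdings_sort_key)
```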
LP#1836963: reduce the cost of utility functions, speeding up search

For large org trees, several seconds are spent testing org visibility. The
immediate cause is that AppUtils::get_org_tree() does not populate the
process-local cache with a memcache-cached org tree. That only happens when
memcache does not have a copy of the org tree. This is obviously a simple
oversight, which is addressed by making sure any memcache return value is
pushed into the process-local cache.

Additionally, the visibility check makes heavy use of indirection and
delegation to utility code, when some slightly smarter code could avoid many
repeated function calls. We now supply some local utility code to flesh and
unflesh the parent_ou field of objects in the org tree, allowing us to avoid
using find_org() and instead just call parent_ou() when walking "up" the
tree.

Signed-off-by: Mike Rylander <mrylander@gmail.com>
Signed-off-by: Galen Charlton <gmc@equinoxinitiative.org>
LP#1676608: copy alert and suppression matrix

The Copy Alerts feature allows library staff to add customized alert
messages to copies. The copy alerts will appear when a specific event takes
place, such as when the copy is checked in, checked out, or renewed. Alerts
can be temporary or persistent: temporary alerts will be disabled after the
initial alert and acknowledgement from staff, while persistent alerts will
display each time the alert event takes place.

Copy Alerts can be configured to display at the circulating or owning
library only or, alternatively, when the library at which the alert event
takes place is not the circulating or owning library. Copy Alerts can also
be configured to provide options for the next copy status that should be
applied to an item. Library administrators will have the ability to create
and customize Copy Alert Types and to suppress copy alerts at specific org
units.

Copy alerts can be added via the volume/creator and the check in, check out,
and renew pages. Copy alerts can also be managed at the item status page.
Copy alert types can be managed via the Copy Alert Types page in Local
Administration, and suppression of them can be administered via the Copy
Alert Suppression page under Local Administration.

Co-authored-by: Galen Charlton <gmc@equinoxinitiative.org>
Signed-off-by: Mike Rylander <mrylander@gmail.com>
Signed-off-by: Galen Charlton <gmc@equinoxinitiative.org>
LP#1527731: Allow specified join order

With this commit we now support user-defined join order in cstore and
friends. Previously, because the join structure of oils_sql beyond the
specification of a single table could only be represented as a JSON object,
it was subject to potential hash key reordering -- thanks, Perl. By
supporting an intervening array layer, one can now specify the exact join
order of the tables in a join tree.

For example, given the following JSON object passing through a modern Perl 5
interpreter as a nested hash:

  {select : {acp:['id'], acn:['record'], acpl:['name'] },
   from   : {acp: {acn:{filter:{record:12345}}, acpl:null } } }

the FROM clause of the query may end up as:

  FROM acp
    JOIN acn ON (acp.call_number = acn.id AND acn.record = 12345)
    JOIN acpl ON (acp.location = acpl.id)

Or as:

  FROM acp
    JOIN acpl ON (acp.location = acpl.id)
    JOIN acn ON (acp.call_number = acn.id AND acn.record = 12345)

In some situations, the join order will matter either to the semantics of
the query plan, or to its performance. The following example of the newly
supported syntax illustrates how to specify join order:

  {select : {acp:['id'], acn:['record'], acpl:['name'] },
   from   : {acp:[ {acn:{filter:{record:12345}}}, 'acpl' ]} }

And the only FROM clause that can be generated is:

  FROM acp
    JOIN acn ON (acp.call_number = acn.id AND acn.record = 12345)
    JOIN acpl ON (acp.location = acpl.id)

Why is this important
---------------------
While Postgres' planner is very smart, a join tree with many tables may
create a plan search space that is simply too large to be tested
efficiently. In such cases, Postgres will do its best to find a good plan
for the query using its GEQO algorithm. Often, a DBA or developer has enough
understanding of the expected relative data sizes involved to give Postgres
a leg up by specifying a join order that improves the planner's chances of
generating an optimal plan.
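The underlying issue -- JSON objects carry no ordering guarantee while
arrays do -- can be illustrated outside Perl (a language-neutral sketch, not
cstore code):

```python
import json

# Array form: the join order is part of the data structure itself.
from_clause = {"acp": [{"acn": {"filter": {"record": 12345}}}, "acpl"]}

def join_order(clause):
    """Return the joined class names in the order a query builder would
    see them. With the object form, iteration order is an implementation
    detail of the hash; with the array form it is explicit (sketch)."""
    (core, joins), = clause.items()
    order = [core]
    for join in joins:
        # Each element is either a bare class name or a one-key object.
        order.append(join if isinstance(join, str) else next(iter(join)))
    return order

# The array survives a JSON round trip with its order intact:
round_tripped = json.loads(json.dumps(from_clause))
```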
Signed-off-by: Mike Rylander <mrylander@gmail.com>
Signed-off-by: Jason Stephenson <jason@sigio.com>
Signed-off-by: Galen Charlton <gmc@equinoxinitiative.org>
LP#1698206: Eliminate Staged Search

=== Background

Evergreen stores all data, including that useful for patron and staff
search, in a normalized schema that is time and space efficient for
transactional use cases, and provides guarantees on data integrity. In
addition, development is made simpler than would otherwise be the case, and
arbitrary reporting is made possible.

However, this structure is not effective for direct, SQL-only search
functionality in a hierarchical, consortial dataset. This is a problem that
is relatively unique to Evergreen, as it is most often employed to host and
serve large consortia with overlapping bibliographic datasets and
non-overlapping item and location datasets. Other search engines, including
those built into other ILSs, do not generally have to account for
hierarchically organized location visibility concerns as a primary use case.
In other words, because Evergreen provides functionality that requires a
hierarchical view of non-bibliographic data, it faces a problem space that
is essentially nonexistent in competing products.

Evergreen's search infrastructure has evolved over the years. In its current
form, the software first performs a full text search against extracted
bibliographic data and limits this initial internal result set to a
configurable size. It then investigates the visibility of each result on
several non-bibliographic axes. These visibility tests take up the
preponderance of CPU time spent in search, with full text search of the
bibliographic data generally completing within milliseconds.

The main reason this multi-stage mechanism is used is that there are many
visibility axes, and attempting to join all required data sources together
in a single query causes the search use case to perform very poorly. A
previous attempt to create a pure SQL search mechanism failed for this
reason.
A significant drawback of the current approach is that the cost of
visibility-filtering search results using normalized non-bibliographic data,
either in-query or separated from the main full-text query as it is today,
makes it necessary to place limits on the number of database rows matched by
full-text query constructs. This in turn can cause searches to omit results
in certain situations, such as a large consortium consisting of a few large
libraries and many small libraries.

However, it has been shown possible to overcome this performance issue by
providing an extensible way to collect all visibility related information
together into a small number of novel data structures with a compact
in-memory representation and very fast comparison functions. In this way, we
are able to use pure SQL search strategies and therefore avoid result
visibility problems while also benefiting from improvements to the core
PostgreSQL database engine. Further, this will open the door to indexing
improvements, such as removal of the need for duplicate data storage, or the
use of non-XML data storage schemes, which could reduce resource
requirements and have a direct, positive effect on patron and staff search
experience.

=== Overview of existing search logic

. Construct core bibliographic search query
. Collect non-bibliographic filtering criteria
. Pass query and filters to a database function
. Calculate hierarchical location information for visibility testing
. Open cursor over core query, limited to *superpage_size * max_superpages*
  records
. Core query implements bib-level sorting
. For each result
.. NEXT if not on requested superpage
.. Check deleted flag, based on search type
.. Check transcendence
... Return result if true
.. Check for direct Located URI in scope
... Return result if exists
.. Check copy status + (circ lib | owning lib) based on modifier
.. Check peer bib copy status + (circ lib | owning lib) based on modifier
.. Check copy location based on filter
.. Check peer bib copy location based on filter
.. General copy visibility checks
... If NOT staff
.... Check for OPAC visible copies (trigger-maintained materialization)
.... Check for peer bib OPAC visible copies
... If staff
.... Confirm no copies here
.... Confirm no peer bib map
.... Confirm no copies anywhere
.... Confirm no Located URIs elsewhere
.. Return result if not excluded
. Calculate summary row

=== Overview of new mechanism

Record and copy information (everything checked in *(7)* above) is collected
into a novel data structure that allows all visibility-indicating criteria
to be flattened to integer arrays. This is facilitated by a database trigger
in much the same way that basic OPAC copy visibility is collected for copies
today.

Most identifiers in Evergreen are stored as signed integers of either 32 or
64 bits. The smaller 32 bit space allows for approximately two billion
positive entries, but all identifiers for table rows that are used as
visibility axes fall into a range of between one and one million for all
applicable use cases, and all identifiers of interest are positive.
Therefore, we can make use of the most significant bits in an integer value
to create a per-axis namespacing mask. When applied to the identifier for a
visibility axis, this mask allows two values that are identical across axes
to be treated as unique within a combined set of all values. Specifically,
we retain the four most significant bits of the integer space and create
from that 16 potential bitmasks for per-axis segregation of identifiers.

Further, we separate copy-centered axes and bibliographic record-centered
attributes into two separate columns for storage purposes, which means we
can use the same four bits for different purposes within each copy or bib
set. In order to implement existing visibility tests with this
infrastructure, six copy axes and two record axes are used from the possible
16 from each set.
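The namespacing idea can be sketched as follows (illustrative shift and axis
numbers only; the authoritative mapping is in
search.calculate_visibility_attribute()):

```python
# Keep the high bits of a 32-bit integer as a per-axis namespace. To keep
# this sketch within positive signed-int territory we shift by 27 rather
# than using the true top four bits; real identifiers stay far below 2^27.
AXIS_SHIFT = 27

def mask_attr(axis, identifier):
    """Tag an identifier with its axis number so identical IDs from
    different axes remain distinct inside one combined integer array."""
    assert 0 <= axis < 16 and 0 < identifier < (1 << AXIS_SHIFT)
    return (axis << AXIS_SHIFT) | identifier

# Copy status 1 and circulating library 1 no longer collide once masked
# (axis numbers here are made up for the example):
STATUS_AXIS, CIRC_LIB_AXIS = 0, 1
copy_attrs = [mask_attr(STATUS_AXIS, 1), mask_attr(CIRC_LIB_AXIS, 1)]
```

A query_int-style test then reduces to integer membership checks against
such an array, rather than joins against the normalized tables.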
See the search.calculate_visibility_attribute() function for details.

By using 32 bit integers we can collect all of the bitmasked values of each
type (copy or bib) into a single integer array and leverage the Postgres
intarray extension to test all axes at once.

At search time, required and user-requested visibility restrictions are
converted to *query_int* values. Results are directly filtered based on
these calculated *query_int* values. This works in a way analogous to record
attribute filtering, avoiding the need to test statuses, circ and owning
library visibility, copy locations and location groups, copy OPAC
visibility, peer bibliographic records, Located URIs, or bibliographic
record sources directly.

=== Minimum Postgres version requirement

Due to features, particularly functions, available only in 9.4 and newer
that are key to the performance of the new method, Postgres 9.4 will need to
be the new lowest supported version for use with Evergreen. While some of
the new features and functions could be implemented as user-defined
functions in PL/PGSQL, they would not be fast enough to make this pure-SQL
search viable.

Among the important improvements that Postgres 9.4 and newer versions bring
to Evergreen are:

* Version 9.4 improved GIN indexes in ways that directly benefit Evergreen,
  as well as how anti-joins are planned, which matters for some Evergreen
  searches.
* Version 9.5 introduced many general performance improvements, especially
  for joins and sorting, and brought planner improvements that impact
  complex queries such as those generated by this code.
* Version 9.6 delivered more general performance improvements, particularly
  for large servers such as those that Evergreen databases tend to live on,
  as well as more improvements to GIN indexes, executor changes that can
  avoid unnecessary work in search queries, new built-in full-text phrase
  searching, and initial parallel query execution.
=== Performance

The non-bibliographic filter value caching maintenance process is 10-40%
faster than the existing partial caching logic it replaces. The new code
achieves up to 10% faster search times than the old, suboptimal mechanism
for broad searches, and is faster still for more selective searches, often
by up to 90%. In both broad and narrow search cases the new mechanism
performs with complete accuracy and does not miss small-collection hits in
large consortia as the existing code does.

Unsurprisingly, and in addition to the above improvements, performance
improves marginally with each successive Postgres version at and beyond 9.4.

=== Page rendering changes

Previously, Evergreen would request the record details for a user-visible
page of results in parallel, and then, serially, request the facet data for
the result set. Now, the facet data is requested asynchronously in the
background and a single feed containing all records on a result page is
requested synchronously. By parallelizing the result and facet metadata
requests, page rendering time is cut down significantly. Concurrent requests
of the same bibliographic record are shared between Apache backends to
reduce result request time, and by making one request instead of ten
simultaneously, database load is reduced. A performance improvement of up to
20% in post-search page rendering time is seen from this change.

Additionally, cross-Apache caching of ancillary data, such as the coded
value map and other data, via memcache significantly reduces the average
page rendering time, not just for result pages but for most pages generated
by Evergreen. An additional performance improvement of up to 50% in
post-search page rendering time is seen from this change.
While these changes are not directly related to the removal of staged
search, they touch areas impacted by core search changes and provided enough
improvement that implementing them concurrently with the elimination of
staged search seemed optimal.

=== User visible configuration changes

The stock configuration now provides an increased value for *max_superpages*
in opensrf.xml. The default is now 100, and the *superpage_size* remains
1000, for a total limit of 100,000 hits per search. This is not a limit on
visibility per se, as all records are visibility tested and ranked before
limiting, but simply a limit on the number of pages a user could click
through before reaching the end of the presented result list.

=== Tuning sensitivity

User-level timeouts are still possible with both the old and new code, given
a large enough dataset, a broad enough query, and a cold cache. However, the
*gin_fuzzy_search_limit* GUC can be used to set a time cap on the new
mechanism. See https://www.postgresql.org/docs/9.6/static/gin-tips.html for
background, though the suggested values in the documentation are
significantly lower than would be readily useful for a large Evergreen
instance.

Because it uses a more complex query structure, the new mechanism is
somewhat more sensitive to Postgres tuning in general. In particular,
lowering *random_page_cost* from the default of *4.0* to a more reasonable
*2.0* is important for proper query planning. For Evergreen use cases where
the search indexes and relevant tables are kept in RAM or SSDs are used for
storage, this value is acceptable and useful in general.

=== Funding and development

This project was funded by MassLNC and developed by Equinox Open Library
Initiative.
Signed-off-by: Mike Rylander <mrylander@gmail.com>
Signed-off-by: Galen Charlton <gmc@equinoxinitiative.org>
Signed-off-by: Kathy Lussier <klussier@masslnc.org>

Conflicts:
    Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Util.pm

Signed-off-by: Kathy Lussier <klussier@masslnc.org>
LP#1705524: Honor timezone of the acting library where appropriate

This is a followup to the work done in bug 1485374, where we added the
ability for the client to specify a timezone in which timestamps should be
interpreted in business logic and the database. Most specifically, this work
focuses on circulation due dates and the closed date editor.

Due dates, where displayed using stock templates (including receipt
templates) and used for fine calculation, are now manipulated in the
library's configured timezone. This is controlled by the new 'lib.timezone'
YAOUS, loaded from the server when required.

Additionally, closings are recorded in the library's timezone so that due
date calculation is more accurate. The closed date editor is also taught how
to display closings in the closed library's timezone. Closed date entries
also explicitly record whether they are a full day closing or a multi-day
closing. This significantly simplifies the editor, and may be useful in
other contexts.

To accomplish this, we use the moment.js library and the moment-timezone
addon. This is necessary because the stock AngularJS date filter does not
understand locale-aware timezone values, which are required to support DST.
A simple mapper translates the differences in format values from AngularJS
date to moment.js.

Of special note are a set of new filters used for formatting timestamps
under certain circumstances. The new egOrgDateInContext, egOrgDate, and
egDueDate filters provide the functionality, and autogrid is enhanced to
make use of these where applicable. egGrid and egGridField are also taught
to accept default and field-specific options for applying date filters.
These filters may be useful in other or related contexts.

The egDueDate filter, used for all existing displays of due date via Angular
code, intentionally interprets timestamps in two different ways WRT
timezone, based on the circulation duration.
If the duration is day-granular (that is, the number of seconds in the
duration is divisible by 86,400, or 24 hours worth of seconds) then the date
is interpreted as being in the circulation library's timezone. If it is an
hourly loan (any duration that does not meet the day-granular criterion)
then it is instead displayed in the client's timezone, just as all other
timestamps currently are, because of the work in 1485374.

The OPAC is adjusted to always display the due date in the circulating
library's timezone. Because the OPAC displays only the date portion of the
due date field, this difference is currently considered acceptable. If this
proves to be a problem in the future, a minor adjustment can be made to
match the egDueDate filter logic.

This work, as with 1485374, was funded by SITKA, and we thank them for their
partnership in making this happen!

Signed-off-by: Mike Rylander <mrylander@gmail.com>
Signed-off-by: Tina Ji <tji@sitka.bclibraries.ca>
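The day-granular rule above can be sketched as a small predicate
(hypothetical function names; the real logic lives in the egDueDate filter):

```python
SECONDS_PER_DAY = 86_400

def due_date_display_timezone(duration_seconds, circ_lib_tz, client_tz):
    """Day-granular loans display the due date in the circulating
    library's timezone; hourly loans use the client's timezone, as
    described above."""
    if duration_seconds % SECONDS_PER_DAY == 0:
        return circ_lib_tz
    return client_tz

# A 14-day loan follows the circulating library's zone; a 3-hour loan
# follows the client's zone.
```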
LP1700773: Add Circ Mod to Staff TPAC

Add the Circ Modifier to the Record Detail page in the staff opac so users
don't have to go back and forth between Holdings Maintenance as often.

Signed-off-by: Jason Boyer <jboyer@library.in.gov>
Signed-off-by: Josh Stompro <stomproj@larl.org>
Signed-off-by: Galen Charlton <gmc@equinoxinitiative.org>