Thursday, December 16, 2010

2010/12/6-9,13-16 - Goodbye 2010!

It's been a good couple weeks at the library. I managed to get a couple things done, and try a couple new things too. First, I helped Jack set up his new laptop with the library C# code (libsharp) I wrote when I first started at the library. Jack wants to move away from writing VB6 code and start up with .NET, so hopefully the C# code helps him out. We managed to install the code's dependencies, and build a few of the applications. We'll see how it goes.

Next, I set up a spreadsheet with Google Apps to illustrate how we could begin tracking library statistics online. I think I'm the only one excited about the idea, but I'll try to get something off the ground in 2011.

Kathie at the Honors College got in touch with me. It looks like we might have our first student submit an honors thesis to our new online collection. Hopefully that comes off without a hitch.

I did a little vufind work - adding a "collection" sub-facet under the "Digital Collections" location facet. Clint and I enabled full-collection browse by simply handling an empty search string as "*:*", and it works surprisingly well. I went back and patched the vufind import to set up a location facet for government documents, and to index online theses and dissertations as an ETD collection under the "Digital Collections" facet. A full Voyager import to vufind is running on devcat - hopefully that will finish in the next few days. Since I was working on the import tool, I went ahead and added a "delete records" tab to the GUI version.
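
The empty-search trick is tiny: if the query box is blank, substitute Solr's match-all query so a bare facet click browses the whole collection. Our actual patch lives in vufind's PHP; the sketch below shows the same idea against Solr directly with SolrJ (a minimal sketch - the filter and facet field names are illustrative, not necessarily our real schema).

CollectionBrowse.java:
--------------------------------

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CollectionBrowse {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer solr =
            new CommonsHttpSolrServer("http://devcat.lib.auburn.edu:8080/solr/biblio");
        String userInput = (args.length > 0) ? args[0] : "";
        // An empty search string becomes Solr's match-all query,
        // so a facet click with no keywords browses everything.
        String q = userInput.trim().isEmpty() ? "*:*" : userInput;
        SolrQuery query = new SolrQuery(q);
        query.addFilterQuery("building:\"Digital Collections\"");  // illustrative field name
        query.setFacet(true);
        query.addFacetField("collection");                         // illustrative field name
        QueryResponse rsp = solr.query(query);
        System.out.println("hits: " + rsp.getResults().getNumFound());
    }
}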

Finally, today I spent some time setting up a script on Yahoo Pipes that merges several library RSS feeds into a unified feed. I think Clint and Tony like the idea of publishing a unified feed for the library; we'll see what happens.

I'm looking forward to two weeks off over the holiday break. I'll be back in a couple weeks.

Friday, December 3, 2010

2010/11/29-12/2 - all the little things

It was a good week for accomplishing little things. First, Clint and I wired up webcat (our Voyager Tomcat OPAC under development by Clint and Denise) with support for Auburn-ID authentication via the AuCataloging webapp.

Next, we rolled out some patches to the production vufind server that incorporate hierarchical call-number facets, an AUBIExpress link in the record detail view, and an updated "no results" page.

I added the libsharp code to littleware's Google Code project. Jack wants to move away from developing VB6 code and move toward .NET - hopefully libsharp will ease his migration.

Finally, we interviewed the last two candidates for the systems department position. They were all good candidates.

Tuesday, November 23, 2010

2010/11/15-18,22-23 - Happy Turkey Day!

It has been a good couple weeks. I've managed to stay out of trouble, and everyone is looking forward to Thanksgiving. I also managed to get a few things done. First, we imported Claudine's pamphlet collection into our DSpace repository. Jon installed the Solaris freeware xpdf package on repo, which allowed me to enable the DSpace support for PDF thumbnails, and Tony helped fix the layout problems that popped up when the thumbnails came online. I also imported the collection metadata into the vufind index on devcat with our vufind-OAI harvest tool and this xsl.

We interviewed a couple good candidates for the systems department position. The poor candidates each endured three hours of meetings, and we interviewed two candidates in a day, so it was exhausting for everyone. Two more candidates to go next week.

I checked in a couple vufind patches. One patch adds a link to a record's holdings screen that auto-populates the AUBIExpress (illiad) request form. The other patch fixes the call-number refinement in the search facets that I broke a while ago when I replaced the AJAX facets. I think Clint will probably get the ok to release the patches next week.

Finally, I was able to set up authentication with DSpace credentials for the minnows data-submission webapp. This bLog post has more details.

Sunday, November 14, 2010

2010/11/8-11 - whoops!

This week had good news and bad news - bad news first.

There was a little excitement this week as we accidentally released a bug to our live vufind server. On Monday we flipped the switch on a VM Clint had prepared for release that included a bug (that I wrote - ugh!) in the AJAX handler that marries a call-number to its proper item in a search result. The bug had gone unnoticed on our test server for over a week, and didn't lead to serious problems till Monday night, when search results started popping up with wacky call numbers. The reference desk paged systems, but the page went unnoticed till Tuesday morning (before I came in), when Denise and Jon called Clint, at home on vacation. Clint quickly found the bad commit I had checked in, and reverted it. I keep expecting someone to throw a fit, but people have been cool, so maybe I'm too cynical.

The good news was that Claudine signed off on the metadata massaging that my import program runs through to marry her collection of scans with catalog MARC records for import into d-space. This project has been lingering since June - hopefully the end is in sight. I'll import the rest of the collection on Monday, and set up some initial XSL for Marliese to index the collection into vufind on devcat.

Finally, the vufind bug inspired me to set up some vufind regression tests. The following e-mail excerpt explains what I did.



--- Reuben Pasquini 11/13/10 8:52 PM ---
Hi Clint!

I set up a framework for vufind regression tests that we can
add to over time.
    http://code.google.com/p/littleware/source/browse/?repo=catalog#hg/VufindRegression/src/main/scala/edu/auburn/library/tool/vufindTest

You can download a build of the test runner at:
    http://devcat.lib.auburn.edu/tests/vufindTest.zip
- just run 'vufindTest.bat'.

The test suite currently just runs through 3 sets of tests against devcat.
The first test runs a "Harry Potter" search (
    using HtmlUnit http://htmlunit.sourceforge.net/
), and verifies that the results include a couple expected records,
and that the call-numbers are correct on those 2 records.

The second test loads a specific Harry Potter record, then
checks the title,
verifies that the record has some holdings information,
and checks that the "campus delivery" cookies are set.

Finally, the third test accesses
    http://devcat.lib.auburn.edu/vufind/AuburnTestSuite.php
which runs through a suite of PHP unit tests
    http://devcat.lib.auburn.edu/cgi-bin/hgwebdir.cgi/vufind/file/tip/web/AuburnTestSuite.php

Anyway, we can start running these regressions before releasing new code.
I can set you up with a copy of the code if you want to add some tests.
I want to add tests that directly access some of the AJAX services, 
add a test fixture for Solr.php,
and set up a Hudson server (http://hudson-ci.org/) to
automatically run the tests whenever we commit some code to the vufind repository,
but I probably won't get to that for a while ...

Let me know if you run into problems with the test runner ...

Cheers,
Reuben
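
For flavor, the first test boils down to something like this HtmlUnit check - a minimal sketch; the URL parameters and the assertion here are made up, and the real expected records and call numbers live in the repository:

SearchSmokeTest.java:
--------------------------------

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class SearchSmokeTest {
    public static void main(String[] args) throws Exception {
        WebClient client = new WebClient();
        try {
            // Run a "Harry Potter" search against the dev catalog.
            HtmlPage page = client.getPage(
                "http://devcat.lib.auburn.edu/vufind/Search/Home?lookfor=harry+potter");
            String text = page.asText();
            // Hypothetical assertion - the real tests verify specific
            // records and their call numbers.
            if (!text.contains("Harry Potter")) {
                throw new AssertionError("expected record missing from results");
            }
            System.out.println("search smoke test passed");
        } finally {
            client.closeAllWindows();
        }
    }
}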

Sunday, November 7, 2010

2010/11/1-4 - election day!

It was another quiet week at the library. I spent some time updating the littleware build configs to simplify setting up the build for a new project. The build.xml files for each project now share a master build-template that defines some common cross-project build rules, and automatically installs ivy. I also updated the ivy.xml files to define compile, client, server, ee_client, ee_server, and test standard configurations, and to reference latest.integration dependency versions where appropriate. I hope to set up a Hudson continuous integration server in the next few months to automate the test, artifact release, and server deploy processes.

I put most of the code in place that integrates the Google spreadsheet data with the ACES collection d-space import tool. I hope to have a version of the collection on the server next week.

Finally, I helped with a couple small vufind bugs. Clint worked hard this week to release a few patches to the production server. There's some kind of big "vufind, love it or hate it" meeting next week. There are plenty of flaws with our vufind setup, but it has a lot of advantages over our only alternative.

Tuesday, November 2, 2010

2010/10/25-28 - Happy Halloween

I got a few nice things working this week. First, the multi-step minnows submission process is running. I scheduled a meeting for Thursday, but Jon forgot about it, so we'll try again next week.

Next, I integrated Barbara Bishop's gifs of the library maps into our devcat vufind install. I used ImageMagick to down-rez Barbara's images and convert them to PNG, and wired up vufind to pop up a YUI-2 panel to display the map. Barbara and Marliese gave some good feedback, but we haven't released the code to catalog yet - bla.

When working on the maps thing I stumbled across the fact that our "retrieve X from stacks" request system consists of a patron writing down what he/she wants and handing it to circulation. I'd like to try to set up an online system - submit a request, and get an e-mail or text when the item is at circulation kind of thing.

Finally, I implemented a MultiVoyager.php vufind driver for some people on the vufind e-mail list. I sent out the following e-mail, but haven't heard back from anyone. Bla.


--- Reuben Pasquini 10/31/2010 4:57 PM ---
Hi David,

I was able to get a simple 2-ILS system working at Auburn on our dev server.
Here's what I did.

*. Set up a solrmarc import that prepends an ILS-specific prefix to each record.
    You can get something working quickly by modifying the 'id' rule in
    the solrmarc marc.properties to:

      id = script(customId.bsh),getCustomId("PREFIX")

    where customId.bsh has:

import org.marc4j.marc.Record;
import org.marc4j.marc.ControlField;


/**
 * Generate a custom id as prefix + 001.
 * Note: solrmarc passes the current Record as the first argument
 * automatically; the "PREFIX" literal from marc.properties arrives
 * as the second.
 */
public String getCustomId( Record record, String prefix ) {
    return prefix + ((ControlField) record.getVariableField( "001" )).getData();
}

.........................................

*. I wrote a 
      MultiVoyager.php
driver that reads
     MultiVoyager.ini
and maintains a list of Voyager.php driver instances.
I attached a copy of MultiVoyager.ini, or
you can clone a copy out of our Mercurial repository:
     http://catalog.lib.auburn.edu/cgi-bin/hgwebdir.cgi/vufind/ 

The MultiVoyager.ini file maps a prefix to an ILS config -
an example follows.
The "EMPTY" keyword indicates no prefix.
This example works for records that look like
     12345 (no prefix)
and 
     AUM12345 (AUM prefix).

MultiVoyager.ini:

[Catalog]
configList = EMPTY,AUM
defaultDriver = EMPTY
[EMPTY]
host        = bla
port        = 1521
service     = VGER
user        = bla
password    = bla
database    = bla
pwebrecon   = http://pooh.lib.auburn.edu:7008/vwebv/holdingsInfo 
[AUM]
host        = bla
port        = 1521
service     = VGER
user        = bla
password    = bla
database    = bla
pwebrecon   = http://pooh.lib.auburn.edu:7008/vwebv/holdingsInfo 

........................................

*. I had to patch a few other php lines here and there too
    to get the ajax load of holding status working right,
    modify Voyager.php to accept a config in the constructor
    rather than load Voyager.ini, ... - stuff like that.
    The complete patch is here, but our vufind code is
    pretty far out of date compared to vufind.org:
       http://catalog.lib.auburn.edu/cgi-bin/hgwebdir.cgi/vufind/rev/9e94166d5e0b 


........................

Hope this code works for you - let me know how it goes.

Cheers,
Reuben


-----Original Message-----
From: Reuben Pasquini [mailto:rdp0004@auburn.edu] 
Sent: Monday, October 18, 2010 1:20 PM
To: Harmon, Kelly; vufind-general@lists.sourceforge.net; Osullivan L.
Subject: Re: [VuFind-General] Anyone merging more than one Voyager
Database into VuFind?

At Auburn we currently only index one Voyager database, but
we also OAI harvest other collections.
An easy trick is to just prepend a unique prefix before the
normal bib-id.
For example - we have 'SLEDGE143' and '143' records:
   http://catalog.lib.auburn.edu/vufind/Record/SLEDGE143 
   http://catalog.lib.auburn.edu/vufind/Record/143 

I think you'll have to hack at least 2 things to get this scheme to
work:

  *. Modify Drivers/Voyager.php to manage connections
    to multiple Oracle servers, and choose a connection
    based on the id-prefix.

  *. Modify the solrmarc vufind.properties file, and replace
          id = 001, first
    with something like
          id = my001WithPrefixFunction
    ... something like that.

I'd like to do something like this myself, but it keeps falling off
the end of my TODO list.
Let me know if you need help implementing this, and I'll set aside a 
day or two to give it a try.

Cheers,
Reuben

Sunday, October 24, 2010

2010/10/18-21 - fish data

It was a quiet week for me at the library. I spent almost all my time working on the multi-step zip-submission forms for the minnows morphology project. The code is coming along well - it's just taking some time. I'll keep at it next week.

Sunday, October 17, 2010

2010/10/04-07,11-14 - lazy blogger

I haven't kept up with this bLog the last couple weeks, but there's not much to tell anyway. I've mostly just been working on my usual assortment of projects.

  • I finished a v2v refactor moving the CLI interface over to littleware's lgo infrastructure, and adding Clint's -halt and -continue flags. Clint kicked off an import last week, and it ran like a charm.
  • I sent Jon Armbruster a link to a prototype data-submission form for the cyprinella morphology repository. After a little back and forth we added a few more requirements for the submission to support. I'll work on that next week.
  • After Claudine finishes her pass over the ACES data I think we'll be set to import that collection into the repository.
  • We met with Troy from fisheries who has a document collection he'd like us to index into vufind. It looks like that will be easy for us to do - Troy will send us an XML data file, and we'll run that through our XSLT pipeline. There was a little bit of "should we do this" goofiness that the librarians had to talk themselves through, but I was able to avoid most of that.
  • Along the same lines - there was some nervous collapse in the vufind committee, who decided it's outside their "charter" to test vufind integration with article level search. The librarians will probably form yet another committee to consider article level search.
  • I'm going to check with Tony next week about setting up podcast feeds for some of the video he's posting to vimeo. That should be easy to do, but Tony might prefer to only stream the content rather than make files available for download. We'll see.
  • I also need to touch base with Kathie Mattox at the Honors College next week. It's getting close to the end of the semester - does she still plan to allow honors students to deposit into the online thesis repository?
  • We reviewed the 20 submissions for the library programmer position, and narrowed the list down to 10 phone interviews. They're all good candidates, but I'm not excited about hiring more staff at the library.

Saturday, October 2, 2010

2010/09/27-30 - moving through jello

I spent most of this week working on a submission tool for the minnows morphology project. The following e-mail describes what I'm implementing.

... Reuben Pasquini 09/30/10 1:01 PM ...
Hi Jon,

I've been looking at how to manage submissions to the morphology repository,
and I have some ideas to bounce off you.
I'd like to implement the following process.

  *. We'll set up a custom submission form that only allows a user
    to upload a single .zip file.
    The .zip file may contain an arbitrary number of .tps files,
    and the .jpg images that go with them.

  *. A user uploads the zip file, and fills out a form of metadata.

  *. The server automatically unzips the zip file,
    and breaks it down into individual items like
       http://131.204.172.126:8080/minnows/handle/123456789/10
    and submits the items for review after the user
    verifies that everything looks ok.

  *. The reviewer approves or rejects each individual item.     

That's the direction I'm working in now - sound ok?

Cheers,
Reuben

BTW - If you want to allow Excel spreadsheets in addition to .tps files, 
    then send me instructions on how the data in a spreadsheet
    maps to the data in a .tps file, and I'll code that in too.
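
To make the zip handling concrete, here's a minimal sketch of the unzip-and-group step, assuming a naming convention where sample1.tps travels with its sample1_*.jpg images (the real webapp would also build the DSpace submission for each item):

ZipSubmission.java:
--------------------------------

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class ZipSubmission {
    /** Group the entries in an uploaded zip into items: one .tps file plus its images. */
    public static Map<String, List<String>> scanZip(File zipFile) throws IOException {
        Map<String, List<String>> items = new HashMap<String, List<String>>();
        ZipInputStream zin = new ZipInputStream(new FileInputStream(zipFile));
        try {
            for (ZipEntry entry = zin.getNextEntry(); entry != null; entry = zin.getNextEntry()) {
                if (entry.isDirectory()) {
                    continue;
                }
                String name = entry.getName();
                // Assumed naming convention: sample1.tps goes with sample1_*.jpg
                String base = name.replaceFirst("(_.*)?\\.(tps|jpg)$", "");
                List<String> files = items.get(base);
                if (files == null) {
                    files = new ArrayList<String>();
                    items.put(base, files);
                }
                files.add(name);
            }
        } finally {
            zin.close();
        }
        return items;
    }
}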

I felt like I was moving through jello this week - writing code and doing little things, but nothing released to users. I did contribute (see below) to an e-mail exchange about testing article-level search in our vufind catalog.


... Reuben Pasquini 09/30/10 10:36 AM ...
Hi Nancy!

Thanks for taking the time to write down your thoughts - you have
great ideas.  I'll help stir the pot a little - see if we can get her to boil!

*. Your argument about wanting students
           "
             to think more critically about
             where and what they're searching for information
            "
      made me think that I actually want the opposite.
      I don't want to think at all - just give me what I want.

*. Your point about the apparent overlap between the vufind, voyager-2, 
    and EBSCO efforts is well taken.
    It makes sense to have vufind, voyager-2, and WAG
    under one umbrella.
    I don't think it's a big problem though.
    Clint or Tony are in every meeting, so they keep things connected.
    I think they like meetings.

* Google Scholar:
      http://scholar.google.com/intl/en/scholar/about.html 
   "
Google Scholar aims to rank documents the way researchers do, weighing the full text of each document, where it was published, who it was written by, as well as how often and how recently it has been cited in other scholarly literature.
   "
    Sounds awesome.

*. "Are we using our technical and human resources correctly?"
    Are you having doubts only now ?
    You probably still leave cookies out for Santa ...


Cheers,
Reuben

... "Nancy Noe"  9/29/2010 8:56 PM ...
Hi Marliese

Working tonight at the Village.  I've been able to play around with the
Villanova site a bit, but haven't given it the time or consideration it
deserves.  These are my initial impressions.  In the spirit of
full-disclosure, I should probably say that my comments come from a very
specific information literacy philosophy.

One of the concerns I have with this kind of searching is the same issue
I have with federated searching.  While I understand the 'google-like'
feel of such a search, I want students to think more critically about
where and what they're searching for information.  There was a recent
discussion on the  instruction list serv - are people teaching federated
searching/discovery services? While a couple of people said 'yes', the
majority replied in the negative.   Again, it comes back to helping
students to learn to move away from the 'everything possible' approach,
to one that asks them to consider what discipline/subject specific
resources might provide them with higher quality academic/scholarly
material.  I want them to think critically about their information need
before they start typing in poorly constructed, broad-based keyword
searches, especially when looking for articles.   For instance, I taught
an honors ENGL1127 today, working on literacy issues.  Within the EBSCO
suite, I was able to have them select ASP and three education related
databases.  That's the kind of thing I'm going to teach in class. 
That's where I'm going to fall on the issue.

Technically, I found that in the searches I conducted, there were a lot
of 'clicks' one had to go through before you got to the actual article. 
In two instances, there was a link to full-text that failed (and that
was prior to asking for log-in information.)

Vendors are creating systems that in a way compete with Google Scholar,
yet I suppose there's a 'cost' for these systems.  Perhaps I don't fully
understand, but won't they be charging us for the privilege of using
metadata when we already pay for access to the info through our database
subscriptions?  Does this give us permanent access to the articles? 
Faculty tell me that they now use Google Scholar as the starting point
for their research.   Many schools have a link to Google Scholar off
their library homepage.  Would this serve us just as well?   I may not
be as informed as I need to be on cost and ownership issues.

Should the VuFind cmte investigate?  Maybe.  We're still working on
trying to make the catalog work (advanced searching?) and trying to
determine if we should recommend as primary.  To be honest, the past
couple of weeks I've found myself reverting back to the 'classic'
catalog to answer journal title questions.  I haven't settled on
questions I have about VuFind as a catalog.   I'm not saying that we
shouldn't be exploring new systems/new technologies.  We should.  Is
this the right cmte to take this forward?  We have the new voyager cmte,
and the EDS cmte, and the vufind cmte?  Are we using our technical and
human resources correctly?  Is there a way to better optimize time and
talents, especially since it seems that all of these are converging.

So, those are my two cents this evening.  Again, I'll miss the meeting
tomorrow.

Thanks

Nancy

... "Marliese Thomas"  09/29/10 2:52 PM ...
Hi everyone, 

Here's some background about the opportunity I mentioned at our last
meeting.

We've been talking for a while about taking VuFind to the next level by
including article-level metadata in it.  Indeed, at our August 9th
meeting, Bonnie volunteered to contact vendors about getting a batch of
article-level metadata that we could run through Vufind in order to see
how this might work.

There have been a couple of new developments since that meeting. 
First, Bonnie, Marcia, Aaron, and I met with Jane Burke and Mary Miller
of Serials Solutions a couple of weeks ago.  In the course of that
meeting, Jane told us that Serials Solutions has developed a programming
interface that allows other discovery services, including VuFind, to
search Summon and display results from it.  This functionality allows
VuFind to do article-level searches in Summon without the host library
having to ingest, store, and index large article-level metadata sets
provided by vendors.

To see how the VuFind-Summon combination works in practice, go to
Villanova's VuFind catalog at:

https://library.villanova.edu/Find/Search/Home 

Do a search.  In addition to the familiar-looking VuFind results,
you'll see a box in the upper right-hand corner called "Top results from
Articles & more", with links.  "Articles & more" = Summon.  One example:
https://library.villanova.edu/Find/Search/Results?lookfor=cat&type=&search=catalog+AllFields&submit=Find 


The day after we met with the Summon folks, EBSCO announced that EBSCO
Discovery Service has a similar functionality.  Since we're a full-level
development partner for EDS, we can test this feature with VuFind at no
cost to us.  (Of course, we'd have to buy EDS, or Summon, or a similar
product, in order to offer article-level content to our users as part of
our regular services.) Since then, I have also heard that Ex Libris is
offering this same functionality for their discovery product, Primo
Central. Obviously, this is becoming a common and viable way for
libraries to leverage use of their software contracts.

I believe it would be worth exploring how VuFind works with a
full-featured commercial discovery service like EDS.  In fact, Bonnie
gave Aaron and me the go-ahead to take a shot at getting a VuFind-EDS
test installation working in time for my presentation this weekend at
LITA.  I'd be interested in hearing your thoughts on these developments
at our meeting tomorrow, or when you have had a chance to explore the
Villanova catalog.

Thanks,

Marliese



Marliese Thomas
Database Enhancement Librarian
RBD Library
Auburn University
 
231 Mell Street
Auburn, AL 36849
 
334.844.8171
mst0001@auburn.edu 

Finally, Thursday was Penny's last day. Enjoy your retirement, Penny!

Sunday, September 26, 2010

2010/09/20-23 - ouch!

I sprained my ankle Wednesday morning when I planted my foot on some uneven ground as my three dogs took after a rabbit, so I spent Thursday at home on the couch, but it was still a productive week. Monday morning I presented the OAI-to-vufind tool I've been working on to several people. Marliese will now take care of maintaining our Content-DM digital library collections in Vufind. Liza heard about the OAI tool on Tuesday, and we were able to harvest records into Vufind from the NASA server she's interested in. She'll get back to me eventually to let me know if she wants to begin setting up some harvests of government documents into our Vufind install or not. I'll need to enable command-line options for OAI harvest, so we can set up cron jobs to automatically harvest new records every week or so.

Otherwise I did various small things this week as usual: patched a bug in Vufind to work around some bad data in Serials Solutions' link-resolver, set up a test of EBSCO's article-level search web services for Marliese, worked through some more of the ACES collection spreadsheet, argued with Aaron over Institutional Repository plans, and did a little work on the Minnows morphology project.

Thursday, September 16, 2010

2010/09/13-16 - fish stories

A lot of small things were going on in parallel this week. I spent some time testing the OAI-to-Vufind tool. I exchanged a few e-mails to help a guy at Purdue who's giving OAI2Vufind a try. Hope it works for him!

We met with Jon Armbruster on Wednesday to discuss the Minnows Project. I really like Google sites - we'll see if that project page helps us stay organized or not.

Finally, I finished my first pass over my chunk of Claudine's ACES-collection metadata spread sheet. The data is better than I thought it would be. We should be in good shape to import Claudine's collection into d-space with some Voyager metadata next month.

I also got an e-mail from the Tuskegee archive guys. Their DSpace repository is online (or it was yesterday) - looks good!

Thursday, September 9, 2010

2010/09/07-09 - happy labor day

It was a short week for me at the library, but I managed to get a couple things done. First, we set up a spreadsheet in Google Docs for the ACES collection project where we can manually check that each scan is matched with the right MARC record. I have some code that tried to match scan to MARC by title, but it had a lot of misses that we'll clean up with the spreadsheet.

I was also able to use the vufind-import tool to OAI harvest records from ContentDM into vufind's Solr index. I wrote up the details here.

Sunday, September 5, 2010

2010/08/23-26,08/30-09/02 - chugging away

I was absent-minded the last couple weeks, and completely forgot to update my bLog. Looking back through my e-mail it looks like I spent most of my time on a few things. First, I helped Clint implement a few vufind patches before he kicked off a new full copy from voyager to vufind.

On Thursday 08/26 Aaron, Bonnie, Midge, and I met with Drew and Iryana from the assessment office to further discuss their plans to implement digital measures to standardize the faculty annual review and assessment process. Auburn University will require its faculty to submit material for annual review to our digital measures server, from which department heads, deans, and university management may generate reports and metrics on teaching and research performance for individual faculty on up to the university level.

At the meeting we discussed how assessment's digital measures project might dovetail with the library's nascent plans to implement an institutional repository through which faculty may publish research artifacts for open access by the general public. It turns out that the university's new guidelines for university-funded research request that faculty make their funded research available for open access either through Auburn's own "scholarly repository" (which the library supposedly is developing) or some other online repository.

Anyway, the digital measures project and the open access clause in the funding guidelines have combined to make Bonnie and Aaron very excited that the library quickly establish a repository for faculty research. Some library staff will join Drew and Iryana's group of digital measures beta testers, and I'll setup some collections in our DSpace server on repo that will be ready to accept faculty research papers.

Although I spent a lot of words describing the digital measures effort above, I actually spent much more time over the last two weeks on the tool to marry Claudine's scans of ACES documents with the appropriate MARC records in Voyager from which she wants to pull metadata. Each metadata file from the scanner specifies a title that includes series information like "Bulletin No. 5 of the Alabama Soils Bla bla: Peanut Performance in Alabama", while the matching MARC record stores the "Peanut Performance in Alabama" part as a title, and stores the series information in other parts of the MARC record.

From the start I should have just used Lucene or tapped into the Solr server that backs our Vufind install at http://catalog.lib.auburn.edu, but I thought I already had the set of 1200 or so MARC records to draw from, so I wound up implementing my own simple title-search algorithm. Everything looked good for my set of 4 test records, but when I ran a sanity check against the full set of over 1000 scans, it turned out that many of the MARC records I had pulled based on an earlier exchange with Claudine were not needed, and many needed MARC records were missing. I eventually collected a larger set of MARC records to draw from, but the amount of data that I'm working with now is large enough that my simple title-searcher takes a long time to run over the full record set.

Next week I'll run some more tests over a subset of the data, then run the full data set into a Google Docs spreadsheet, and ask Claudine to go through the title matches and manually correct the MARC bib-ids for titles that the code does not correctly match. Once we have that spreadsheet, I'll feed it to the code I have to generate the dublin core for import into d-space. Anyway - the code is cool and fun, but it's freaking frustrating to trip over so many problems for this stupid little project that could have been done by hand by now.
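
Since I keep saying I should have just used Lucene, here's a minimal sketch of that approach (Lucene 3.x API; the sample record is made up): index bib-id and title for every MARC record, then let Lucene's relevance ranking absorb the extra series verbiage in the scanner titles.

TitleMatcher.java:
--------------------------------

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class TitleMatcher {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);

        // Index one document per MARC record: bib id + title.
        IndexWriter writer = new IndexWriter(dir, analyzer, true,
                IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        doc.add(new Field("bibid", "12345", Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.add(new Field("title", "Peanut performance in Alabama",
                Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();

        // Search with the noisy title from the scanner metadata;
        // relevance ranking tolerates the extra series verbiage.
        IndexSearcher searcher = new IndexSearcher(dir);
        QueryParser parser = new QueryParser(Version.LUCENE_30, "title", analyzer);
        String scanTitle = "Bulletin No. 5: Peanut Performance in Alabama";
        ScoreDoc[] hits = searcher.search(
                parser.parse(QueryParser.escape(scanTitle)), 5).scoreDocs;
        for (ScoreDoc hit : hits) {
            System.out.println(searcher.doc(hit.doc).get("bibid")
                    + " : " + searcher.doc(hit.doc).get("title"));
        }
        searcher.close();
    }
}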

Finally, I made some good progress on the minnows morphology database project we're working on with Professor Jon Armbruster in biology. I read over an introduction to "geometric morphometrics", and I think I came up with a good way of tricking our Minnows DSpace server into storing morphometric data in a dublin core field attached to an item record for a minnow sample's reference photograph. We'll see what Jon thinks next week.

Saturday, August 21, 2010

2010/08/18-19 - goodbye cataloging, hello serials & monographs

The staff meeting Wednesday morning went very well. Aaron, Paula, and Helen presented their plans to merge the acquisitions and cataloging groups into a single unit. Those of us in cataloging will move to new cubes in the acquisitions area of the building by the end of the year. The new unit will be managed as two teams - a monographs team will report to Helen, and Paula will lead the serials and electronic resources team. We all expect our monographs work will continue to shrink over time, but combining acquisitions and cataloging is a good step in streamlining library technical services.

Otherwise I've spent most of the last couple days working on a few patches to the vufind tools. I spent most of the time tracking down the reason why our vufind server does not properly filter combining diacritics, so that a search for vtoriaa finds Russian language records like this. I eventually discovered that vufind's underlying Solr server has a filter system that handles that kind of thing, and that the unicode filter that indexes versions of unicode tokens without diacritics was not properly handling combining half-mark characters. I built an AuUnicodeFilter copy of the unicode filter that just adds a switch-statement block that checks for the combining half marks; problem solved!
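
The real AuUnicodeFilter just patches Solr's stock filter, but the idea fits in a small standalone Lucene TokenFilter - a minimal sketch (Lucene 2.9-era attribute API) with the half-mark range called out explicitly:

StripDiacriticsFilter.java:
--------------------------------

import java.io.IOException;
import java.text.Normalizer;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

/** Strip combining diacritics - including the U+FE20..U+FE2F half marks - from each token. */
public final class StripDiacriticsFilter extends TokenFilter {
    private final TermAttribute termAtt;

    public StripDiacriticsFilter(TokenStream input) {
        super(input);
        termAtt = addAttribute(TermAttribute.class);
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;
        }
        // Decompose the token, then drop every combining mark.
        String decomposed = Normalizer.normalize(termAtt.term(), Normalizer.Form.NFD);
        StringBuilder clean = new StringBuilder(decomposed.length());
        for (int i = 0; i < decomposed.length(); ++i) {
            char ch = decomposed.charAt(i);
            if (ch >= '\uFE20' && ch <= '\uFE2F') {
                continue;  // combining half marks - the case the stock filter's switch missed
            }
            if (Character.getType(ch) == Character.NON_SPACING_MARK) {
                continue;  // all other combining diacritics
            }
            clean.append(ch);
        }
        termAtt.setTermBuffer(clean.toString());
        return true;
    }
}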

Tuesday, August 17, 2010

2010/08/16-17 - patching code

I've spent the last several days on a series of small tasks that added up to take all my time. I checked in several patches for small bugs in the AuCataloging tools, worked on the ACES-project metadata import tool, set up a d-space collection on repo for a professor in computer science to experiment with, installed a valid SSL certificate on repo, .... stuff like that.

Tomorrow there's a big meeting for everyone in the systems, acquisitions, and cataloging groups where Aaron will present some vision of the future of technical services. I'm looking forward to it - technical services can use an overhaul. The cataloging group shrank a lot over the last year with Harriet retiring, Henry's retirement and death, Tom's unexpected death, and Lori leaving for a new job. Then a few weeks ago three women in cataloging were told out of the blue that they would begin reporting to the circulation group. The move was perceived by staff as a ham-handed decree from library leadership, and one of the transferred women was angry enough to go ahead and retire rather than accept the change. I'm sure more personnel moves are in store in the near future considering the libraries' budget constraints, but a meeting to present and exchange ideas with staff will help avoid bad feelings.

Tuesday, August 10, 2010

2010/08/09-10 tweaking vufind

I'm finally back at the library after three weeks away. Three weeks sounds like a long time, but it went fast.

I spent most of today and yesterday experimenting with the Solr index that backs our vufind catalog. Last week the library finally decided to make vufind our default online catalog rather than the older Aubiecat Voyager OPAC.

A few small vufind bugs have popped up that we can deal with, but one big problem we've had is that vufind's Solr server would periodically run out of memory, and require a restart. We've tried several things over the past week (I lent some help from home last week), including moving Solr to a 64-bit JVM and allocating a 5 GB heap, and we also tweaked the Solr cache configuration, but the memory problem persisted.

Last night Clint noticed that the Solr memory use spiked when he ran a title-sort on a search result, so we've been looking at sorting since then. It turns out that Solr's Lucene index engine uses a "field cache" to implement sorting, and the cache size is proportional to the number of unique entries in the sort field and the size of each entry. Our catalog has over three million unique titles, so it's very expensive to process a title sort. I experimented with a solution that just post-sorts the first part of a relevance-ordered search outside Solr, but Clint decided to just disable the title-sort as it's not a critical feature. Problem solved - hopefully!
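
The post-sort experiment is easy to sketch: ask Solr for the first chunk of relevance-ranked hits (no sort parameter, so Lucene never builds the title field cache) and sort that window on the client. A minimal SolrJ sketch - the "title" field name is illustrative:

PostSort.java:
--------------------------------

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrDocument;

public class PostSort {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer solr =
                new CommonsHttpSolrServer("http://devcat.lib.auburn.edu:8080/solr/biblio");
        // Pull the top 200 hits by relevance - no sort param, so no field cache.
        SolrQuery query = new SolrQuery("harry potter").setRows(200);
        List<SolrDocument> docs =
                new ArrayList<SolrDocument>(solr.query(query).getResults());
        // Sort the small result window by title on the client side.
        Collections.sort(docs, new Comparator<SolrDocument>() {
            public int compare(SolrDocument a, SolrDocument b) {
                String ta = String.valueOf(a.getFirstValue("title"));
                String tb = String.valueOf(b.getFirstValue("title"));
                return ta.compareToIgnoreCase(tb);
            }
        });
        for (SolrDocument doc : docs) {
            System.out.println(doc.getFirstValue("title"));
        }
    }
}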

Thursday, July 15, 2010

2010/07/13-15 - ACES Voyager merge pain

I've been working on the tool to decorate the dublin-core for each pdf in the ACES scanning project with metadata from our Voyager catalog. I've got code running that extracts each record's metadata from the METS files the scanning contractor delivered, and I also have the list of Voyager ids, so now I need a strategy to match A with B and glue it all together.

I'm making the best of this ACES project by taking the opportunity to update some shared code and trying some new things, but I believe that the index of each pdf's full-text makes the metadata from the catalog superfluous for discovery - especially after Google and Bing scan our server. The librarians refuse to believe that though!

Tuesday, July 13, 2010

2010/07/08,12 - Honors Theses are Go!

I think the new repo server is nearly ready to start accepting Honors College theses this fall. I updated the site to include links to Midge's final liability release form; submitted a couple test records; spoke with Tony about the web design; sent the following e-mail to Kathie at the Honors College; and finally submitted a purchase order for a new SSL certificate for the server. Tony had the good idea to set up a rule that routes http://repo.lib.auburn.edu/honors/ to the collection too.


--- Reuben Pasquini 7/8/2010 4:58 PM ---
Hi Kathie!

I just want to give you an update on the 
online collection for honors theses.
I think we're on track for a fall release.

*. The attached Word document has the final legal
    disclaimer approved by the lawyers.
    The form is also online:
        http://repo.lib.auburn.edu/HonorsLiabilityForm.pdf 
    and
        http://repo.lib.auburn.edu/HonorsLiabilityForm.doc 
   Let us know if you have questions or see problems.
    
*. The theses collection is at:
         http://repo.lib.auburn.edu/honors/ 
    You can add that link to the Honors College web site
    once we go live.
    We'll remove the 2 test records before then.

*. The library web designer, Tony Oravet, plans to update the
    site design in the next month.
    Please let us know if you have any requests.

*. I'll be out of town July 19 until August 9.
    Can we plan to meet sometime the week of August 9
    to review the site, and to give you some training
    on how to add a student to the system, and how
    to review a student's submission ?
    Monday to Thursday 
    starting 9:30am to 2:00pm is best for me ...

I think that's all I wanted to report.
What do you think ?

Cheers,
Reuben

Wednesday, July 7, 2010

2010/07/06-07 - vufind redux

I spent most of my time the last couple days patching a couple search-related bugs that pop up in corner cases - one with "author" search and another with very long search strings. I think the patches are ok, but there's no regression test suite, so we'll see.

I need to get back to work on the AU-repo ACES import and configuration for the Honors College collection. I made a few small changes yesterday, but I'll do more tomorrow.

Friday, July 2, 2010

2010/06/29-30,07/01 - respect cataloging!

Over the last few days I managed to stage Claudine's ACES collection scans on our server, and configure an XSL transform that translates each scan's title/author/date metadata file into the format DSpace wants for bulk import. I imported some records into a test collection, and sent an e-mail out to let several people take a look. The librarians want to enrich the DSpace records with metadata from our Voyager catalog. For example, this DSpace record corresponds to this Voyager record. I pointed out that the DSpace server indexes the OCR full-text of each record's PDF content, so the extra metadata won't help much with discovery. Librarians don't like it when you say things like that!
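
For reference, the XSL targets DSpace's batch importer layout (the "simple archive format"): one directory per item, holding a dublin_core.xml of metadata plus a contents file that lists the bitstreams. A minimal sketch of code that writes one such item, with made-up values:

SimpleArchiveWriter.java:
--------------------------------

import java.io.File;
import java.io.PrintWriter;

public class SimpleArchiveWriter {
    public static void main(String[] args) throws Exception {
        // One directory per item, as DSpace's ItemImport tool expects.
        File itemDir = new File("aces_import/item_0001");
        itemDir.mkdirs();

        // dublin_core.xml carries the item metadata.
        PrintWriter dc = new PrintWriter(new File(itemDir, "dublin_core.xml"), "UTF-8");
        dc.println("<dublin_core>");
        dc.println("  <dcvalue element=\"title\" qualifier=\"none\">Peanut Performance in Alabama</dcvalue>");
        dc.println("  <dcvalue element=\"date\" qualifier=\"issued\">1955</dcvalue>");
        dc.println("</dublin_core>");
        dc.close();

        // The contents file lists the bitstreams (here, the scanned pdf).
        PrintWriter contents = new PrintWriter(new File(itemDir, "contents"), "UTF-8");
        contents.println("scan_0001.pdf");
        contents.close();
    }
}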

I spent most of Thursday preparing a patch that fixed a bug Julie at the graduate school found in the ETD proquest-export tool.

Monday, June 28, 2010

2010/06/24,28 - repository work

The last couple days at the library involved work on a couple repository collections. I spent most of Thursday installing DSpace on our "minnows" server. Today Claudine brought a USB drive full of her ACES collection scans to my desk. I began copying the data onto my workstation, and dug up the information on how to batch-import the pdfs and metadata into our repository.

Wednesday, June 23, 2010

2010/06/21-23 - time for fish

Over the last few days I finished up the AuCataloging port to the new bootstrap code, split clickTracker, catRequest, and refStats code each out to its own build project, and released a new vufind tool that includes "id-file" support. I'm having fun working on the AuCataloging code, but I'll have to pull myself away tomorrow to install d-space on the minnows-project test server running in a VM on Clint's machine.

Wednesday, June 16, 2010

2010/06/14-16 - back to zero

I've spent most of my time the last few days porting the AuCataloging code base up to the new littleware 2.1 module system. I've finally finished all the individual tools for ETD and VuFind, but I still need to update the AuCataloging webapp. Hopefully that won't be too painful.

Once I get everything back to zero, then I'll go back and add the "read bib-ids from a file" functionality to the Vufind import tool, and also take a look at extending the data-model to save form values to persistent files, so an application starts up with configuration values saved during its last execution.

Wednesday, June 9, 2010

2010/06/07-09 - Humpty Dumpty

Jack has been out this week. He fell over the weekend, and it sounds like he banged himself up pretty good. Hopefully he'll feel good enough to come in tomorrow.

I've spent most of the last few days coding and testing littleware's new bootstrap-module system. I'm very happy with how the code has come together, but it will be at least another week before I finish porting the existing AuCataloging codebase.

Clint finished a full Voyager import into VuFind only to realize at the end that some metadata was not handled correctly. When we looked into it we realized that we had not erased the old solrmarc properties files, so the importer ran with those files rather than the new configuration we had tested. Clint is going to just redo the import. It sucks to have to babysit the import for another week, but we'll know better next time. I'm going to extend the import tool to accept a file full of bib-ids to import, so it will be easy in the future for us to import subsets of our Voyager catalog.

Friday, June 4, 2010

2010/06/01-02 - short week at the library

It was a short week for me at the library as we had Monday off for Memorial Day and I took Thursday off to spend time with visiting relatives. I spent most of my time writing code and fixing bugs, but the big event of the week was an afternoon meeting with Professor Jon Armbruster, who just won an NSF grant to establish an online database of minnow morphology. The library will develop and administer the database as a digital collection. We'll see how it goes.

Saturday, May 29, 2010

2010/05/29 - somebody reads this bLog!

I just sent the following e-mail to the vufind-general list. It turns out somebody actually reads this bLog - awesome!

Hello!

I didn't know anyone actually read my bLog!
Anyway - we run d-space at Auburn for ETD
     http://etd.auburn.edu
and we're working on a more general repository server too.
We have a vufind server that's still beta:
     http://catalog.lib.auburn.edu
We'll hopefully work out some final issues with
our index this summer.

We have a vufind-import tool that builds on top
of SolrMarc to manage MARC record imports
   (Voyager Oracle database -> SolrMARC -> SOLR),
but also supports ( XML -> XSL -> SOLR )
and (OAI -> XSL -> SOLR) pipelines as of a few
weeks ago.
   http://lib.auburn.edu/AuCataloging/jar/vygr2vfnd.jnlp
I'll eventually update the apps on the Google code site too -
     http://code.google.com/p/littleware/
I'm still working on a few patches.
We hope to eventually have a set of automatic cron-jobs
in place that run our import tool to nightly
OAI-harvest collections we want to index in vufind.

Anyway - the tool's XML and OAI import tabs rely upon you to 
write your own XSL that converts your XML import records
into SOLR <doc></doc> blocks that conform to your vufind index's
schema and include data that the Vufind display engine can handle.
The bLog post includes the XSL file we applied at Auburn
to harvest a collection from Georgia Tech's repository:
     http://au-reuben.blogspot.com/2010/05/20100512-14-oai-harvest-to-vufind.html
One note - be sure to configure your XSL so that your import assigns
each collection its own unique id prefix.
For example - at Auburn we use numbers (1,2,3,...) for our
Voyager records (from the 001), 
then have different prefixes for our Content-DM collections -
so the "Carline Dean" collection from Content-DM has a 'CF' prefix:
(CF1, CF2, ...
   ex: http://catalog.lib.auburn.edu/vufind/Record/CF377
).

The XSL import process for us at Auburn may be simpler than for
other vufind installs.
We forked our vufind codebase off vufind.org last year:
   http://catalog.lib.auburn.edu/cgi-bin/hgwebdir.cgi
, and customized our vufind engine to remove dependencies on MARC
that existed at the time.
Every once in a while we go back and merge in code from vufind.org
(like the great mobile theme
        http://catalog.lib.auburn.edu/mobile/
), or at least steal ideas (disable AJAX facet loading).
I understand the vufind RC2 code now supports a "Driver" framework
that allows vufind to deal with non-MARC data in the index,
but we haven't merged that code in, and I don't know if anyone 
takes advantage of that or not.  I think most vufind installs just convert
XML metadata to MARC then run the MARC through SolrMarc to
get non-MARC collections into the index.  I can't stand MARC.

Anyway - feel free to give our import tool a try.
The two big challenges you'll need to work through are
figuring out the XSL, and hacking your vufind install's PHP or driver logic
to deal with the non-MARC data that the tool puts into your index.
Of course you'll also need to enable d-space's OAI-harvest webapp
if you're not already running that on your d-space repository.

Good luck!

Cheers,
Reuben


>>> Eoghan Ó Carragáin <eoghan.ocarragain@gmail.com> 05/28/10 1:48 PM >>>
Hi Paolo,
I can't think of anyone using DSpace & VuFind, but I may be wrong. From
memory, Colorado State University use Vufind to expose their Digital
Repository (DigiTool from Ex Libris), Auburn University use it with OCLC's
ContentDM repository (we do this at the National Library of Ireland too, but
it isn't live yet), & Greg Pendlebury at USQ is working with Vufind &
Fedora/Fascinator. Reuben from Auburn University recently posted a blog
about his work with harvesting metadata into Vufind using OAI-PMH:
http://au-reuben.blogspot.com/2010/05/20100512-14-oai-harvest-to-vufind.html

Are there any specific areas you need help with?

All the best,
Eoghan

On 28 May 2010 16:20, Baglioni, Paolo <Paolo.Baglioni@eui.eu> wrote:

>  Hi All,
>
> Has anybody configured VuFind to search their DSpace repositories? If so,
> any hints on how to go about this?
>
> Many Thanks
>
> Paolo
>
> =========================
>
> Paolo Baglioni
>
> Library Systems Analyst
>
> European University Institute
>
> =========================
>
>
>
>
>
>
>
>
> ------------------------------------------------------------------------------
>
>
> _______________________________________________
> VuFind-General mailing list
> VuFind-General@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/vufind-general
>
>

Thursday, May 27, 2010

2010/05/24-27 - Honors College and Tuskegee

It was a good week in cataloging at the library. Midge and I met with Kathie Mattox from the Honors College on Tuesday. The meeting went very well. We worked out the following procedures to manage the online Honors College thesis collection.

  • Student procedure
    1. Check thesis draft with Kathie at HC. Kathie gives the student a copy of the library disclaimer required to eventually publish the thesis to the library online server.
    2. Student finishes thesis, gets signatures on the various forms, makes three copies, and delivers the copies and forms back to Kathie at HC.
    3. Student posts a pdf-version of the final thesis to the library server.
  • For Kathie
    1. At some point before a student can submit his/her thesis pdf to the server, Kathie must enable that student's account on the server by filling out a web form. Reuben will give Kathie some training on how to do this, and write up instructions.
    2. Kathie will receive e-mail when a student submits a thesis to the server. Kathie will login to the server and review the student's thesis online, then either "Accept" or "Reject" the student's submission via a web form. Reuben will give some training and write up some instructions.

Midge finished her final draft of the disclaimer that we'll ask the student to sign when submitting a thesis to the collection. The Dean wants Auburn's lawyers to sign off on the disclaimer - hopefully that won't turn into a mess.

I went to Tuskegee on Thursday to help with a few setup tasks for their new DSpace repository server. We managed to get a few things done: we registered the Glassfish v3 server as a Windows service, fixed some issues with new user creation and thumbnails, and scheduled an automatic nightly backup. Dana and Rod are doing a great job administering the server, and adding collections. They plan to enable public access to the server in the next month or so.

We also tackled a couple other tasks this week. Liza is helping work through some issues with the new click-tracker code I released last week, and Clint is using the new vufind import tool to test some final changes to vufind's Solr index.

Thursday, May 20, 2010

2010/05/19-20 - Glenn to retire

The big news from Thursday's monthly staff meeting is that Glenn will retire in August. Glenn is a great guy - hope he has fun.

Otherwise I've just been multitasking between a few things. Jon and I rolled out the AuCataloging update yesterday. Everything seems to be working ok. I'm going to try to flip the click-counter data-view over to a PrimeFaces datatable, and do away with the existing custom YUI-table code that now throws errors in Chrome and Safari.

Clint and I met to merge in his Voyager-to-Vufind tool patch that fixes a record-format handling bug. I also just finished adding a log file to track bib ids that fail to transfer into VuFind - usually because Oracle goes down for reboot in the middle of the night.

Finally I exchanged a few e-mails for the Repository project. We're going to try to meet with Kathie from the Honors College next week. Amy in the Business Office also indicated that we can include Auburn's annual reports in a collection, but she hasn't replied yet to a follow-up e-mail I sent. I'll also touch base with Boyd next week on the LADC senior projects.

Tuesday, May 18, 2010

2010/05/17-18 - vufind patch and redlib1 build

I spent most of Monday and Tuesday coding the vufind patch described below, and building out the redlib1 server upgrade for the /AuCataloging webapp. I'll try to meet with Jon tomorrow to roll out the upgrade to the library web server.


--- "Reuben Pasquini" 5/18/2010 12:22 PM ---
Hi Clint,

I just updated the main vufind server
     http://catalog.lib.auburn.edu/ 
with this patch
    http://catalog.lib.auburn.edu/cgi-bin/hgwebdir.cgi/vufind/rev/5f9d22ec998a
to load the facet and search data in the
same index search rather than load
the facets via AJAX.
Let me know if you spot any problems.

Cheers,
Reuben

Thursday, May 13, 2010

2010/05/12-13 - OAI harvest to Vufind

The vufind-import tool now supports simple OAI harvest - I finally got an OAI-harvest-to-VuFind pipeline working. As a test I was able to harvest most of the 15000 records in Georgia Tech's ETD collection into Vufind on devcat.

The tool harvests the OAI metadata, then applies an XSL transform, and finally posts the result to the Solr server: http://devcat.lib.auburn.edu:8080/solr/biblio
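
The pipeline is small enough to sketch end to end: harvest a page of OAI records, run the XSL over each record, and POST the resulting <doc> blocks to Solr's update handler. A minimal JDK-only sketch - no resumptionToken paging, no error handling, and you'd still POST a <commit/> to make the adds visible:

OaiToSolr.java:
--------------------------------

import java.io.OutputStream;
import java.io.StringWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class OaiToSolr {
    public static void main(String[] args) throws Exception {
        // 1. Harvest one page of OAI records from the GTech ETD set.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document response = dbf.newDocumentBuilder().parse(
            "http://smartech.gatech.edu/oai/request?verb=ListRecords"
            + "&set=hdl_1853_4760&metadataPrefix=oai_dc");

        // 2. Run the XSL over each oai_dc:dc record to build Solr <doc> blocks.
        Transformer xsl = TransformerFactory.newInstance()
            .newTransformer(new StreamSource("gaTech.xsl"));
        xsl.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        NodeList records = response.getElementsByTagNameNS(
            "http://www.openarchives.org/OAI/2.0/oai_dc/", "dc");
        StringWriter docs = new StringWriter();
        for (int i = 0; i < records.getLength(); ++i) {
            xsl.transform(new DOMSource(records.item(i)), new StreamResult(docs));
        }

        // 3. POST the <doc> blocks to Solr's update handler.
        HttpURLConnection post = (HttpURLConnection) new URL(
            "http://devcat.lib.auburn.edu:8080/solr/biblio/update").openConnection();
        post.setDoOutput(true);
        post.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        OutputStream out = post.getOutputStream();
        out.write(("<add>" + docs + "</add>").getBytes("UTF-8"));
        out.close();
        System.out.println("solr says: " + post.getResponseCode());
    }
}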

The XSL for the GTech ETD collection is below. You can see what the import-app looks like here: http://redlib1.lib.auburn.edu:8080/AuCataloging/jar/vygr2vfnd.jnlp If you click on the 'OAI' tab, and reset the XSL-file field to a copy of the XSL below, then you can test the OAI import.

I spent some time yesterday setting up the Honors College Collection in our repository server. Boyd told me yesterday that the LADC staff that we need to talk with about setting up an online collection for LADC senior projects are not available during intercession. He'll catch up with them when summer term starts, and hopefully things will work out.

gaTech.xsl:
--------------------------------

<?xml version="1.0" encoding="UTF-8" ?>


<xsl:stylesheet version="1.0"
     xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
     xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     >
  <xsl:output method="xml" indent="yes"/>
<!--
http://smartech.gatech.edu/oai/request?verb=ListRecords&set=hdl_1853_4760&metadataPrefix=oai_dc 

-->

  <!-- default rule: drop nodes we don't explicitly match (copy-through left commented out) -->
  <xsl:template match="@*|node()">
      <!--
     <xsl:copy>
       <xsl:apply-templates select="@*|node()"/>
     </xsl:copy>
        -->
  </xsl:template>


  <xsl:template match="/oai_dc:dc">
   <doc><xsl:apply-templates select="*" />
       <field name="format">Electronic</field>
       <field name="collection">Auburn University ETD</field>
       <field name="building">Auburn University Digital Library</field>
       <field name="publisher">Georgia Institute of Technology</field>
       <field name="allfields">
   <xsl:for-each select="*">
       <xsl:value-of select="." /><xsl:text>
</xsl:text>
   </xsl:for-each>
      </field>
     </doc>
  </xsl:template>

  <xsl:template match="dc:identifier[1]">
    <field name="id">GTechETD<xsl:value-of select='substring-after(.,"http://hdl.handle.net/1853/")'/></field>
    <field name="url"><xsl:value-of select="." /></field>
  </xsl:template>


  <xsl:template match="dc:creator">
     <field name="author"><xsl:value-of select="." /></field>
  </xsl:template>

  <xsl:template match="dc:subject">
    <field name="topic"><xsl:value-of select="." /></field>
    <field name="fulltopic"><xsl:value-of select="." /></field>
   </xsl:template>


  <xsl:template match="dc:title[1]">
     <field name="title"><xsl:value-of select="." /></field>
  </xsl:template>

  <xsl:template match="dc:description">
    <field name="description">
      <xsl:value-of select="." /></field>
  </xsl:template>

</xsl:stylesheet>

Tuesday, May 11, 2010

2010/05/10-11 - working away

I've just been dividing my time today and yesterday between configuring the new AuCataloging server on redlib1, configuring the new Honors College collection on repo, and adding XML+XSLT and OAI-harvest support to the vufind-import tool. Everything is going ok, but everything takes more time than I'd like.

The big news is Lori is back from her teaching internship. She seems none the worse for wear.

2010/05/06 - Honors College Theses Collection

I walked over to the Honors College today, and had a nice discussion with Kathie Mattox about establishing an online collection for the Honors College senior theses. The Honors College is very happy to support the collection if the library sets it up. I'll try to get our new repository in shape to support the Honors College collection over the next few weeks.

I'll try to walk over to LADC next week. Boyd is doing some leg work over there, but I haven't heard back from anyone yet.

Wednesday, May 5, 2010

2010/05/03-05 - Tuskegee and stuff

I spent Tuesday helping Dana and Rod at the Tuskegee University archive set up a DSpace repository server. By the end of the day we had a basic server up and running for them to experiment with. I told Dana that I'd be willing to come back out in a month to help configure the server with a Tuskegee look, enable thumbnails, and configure the collection metadata. Hopefully they like d-space - we'll see how it goes.

I spent some time this week on a spreadsheet and diagram that consider library operations from a macroscopic view. I had a couple lively discussions after I sent an ill-informed e-mail protesting a staffing change, and that got me thinking about how things work at the library. Right now I have a model that organizes library services into three broad categories: manage research material, learning commons, and digital and special collections. The services in each category primarily serve different communities - graduate and faculty, undergraduates, and non-faculty/non-student respectively. I sent a description of the model to a few people - we'll see what they think.

Sunday, May 2, 2010

Library what if ...

Suppose we group MDRL, e-journal and database management (ez-proxy, link resolver, ...), web site and server admin (ETD, Voyager, Vufind), and transfer all that responsibility and budget to an "online resources" subgroup under OIT. OIT gets almost all the materials budget, the systems department, and a group of 4 or 5 e-journal and database specialists (Jack, Paula, ...).

What's left for the library? A staff budget over $4 million to manage a book budget under $250K, reference, and ILL.

Wednesday, April 28, 2010

2010/04/26-28 - Back to Vufind import

I'm back working on the Vufind import tool. I updated to Scala 2.8rc1 and the new NetBeans Scala plugin - both fix several bugs and run great.

I re-arranged the v2v classes into model, view, and controller packages. This organization works great with the ebscoX tool, and I'm happy with how the v2v code fits together. I'll now add OAI-harvesting support to the tool. I hope to re-use the OAI module that the etd tools access - we'll see how it goes.
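
Roughly, the layout looks like this - a hypothetical sketch with invented package and class names, not the real v2v source:

// hypothetical shape of the re-org - names invented for illustration
package edu.auburn.v2v

package model      { class RecordBatch }  // data pulled from Voyager or an OAI feed
package view       { class ImportPanel }  // the Swing screens
package controller { class ImportJob   }  // wires a batch from model into the Solr import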

Rod Wheeler from Tuskegee sent me an e-mail. I'll drive over to Tuskegee next Tuesday, and we'll try to get a d-space repository server up and running there. We'll see how it goes.

It's a quiet week otherwise. Jack has been out sick the last few days, and Helen is on a field trip to some North Carolina universities to tour their library technical services facilities.

Thursday, April 22, 2010

2010/04/21-22 - AuCataloging update ready

The new EE6-enabled AuCataloging webapp is ready for release. I spoke with Jon about installing Glassfish v3 on lib.auburn.edu, but he wants me to wait and deploy the new code on a separate server that he's building for the purpose. In the meantime I'll look at adding OAI support to the Vufind import tool.

Tuesday, April 20, 2010

2010/04/19-20 - finally some ref stats

A basic reference statistics webapp is finally up for testing. The code leverages littleware's new task-tracker APIs. I should be able to update the click-counter and cat-request apps to use the same engine. The new code also ports the entire AuCataloging site to Java EE 6, including Facelets and JSF annotations, so we need to upgrade our server to run Glassfish v3. The following e-mail has more details.

From: rdp0004@auburn.edu 
To: libhelp@auburn.edu 
Subject: setup 'glassfish' user on www - give Reuben password 

Hello LibHelp!

Would you setup a glassfish user on napoleon
with a valid home directory (/home/glassfish or whatever),
and send me the password, so I can
install and configure the new glassfish v3 server ?
https://glassfish.dev.java.net/

I have several updates to the AuCataloging webapp nearly
ready to release that require the glassfish upgrade.
http://lib.auburn.edu/AuCataloging/

If you just setup a 'glassfish' user for me, then I can
take care of most of the setup, so I won't have to pester
Jon so much. 
Or - if you prefer, I can install the new glassfish
on some other server (maybe repo), then we can just add a 'proxy' rule
to www's Apache config.
Once I have the v3 server ready to go,
then we can shutdown the old v2 server installed
under /glassfish. 
What do you think ?

Thanks for the help!

Cheers,
Reuben


--- Reuben Pasquini 4/20/2010 1:59 PM 
Great - thanks for the feedback! :-)

Let me know when you're ready to meet to polish this thing
up to work the way you want.
Hopefully we'll have time to beta test
before the next reference-stats collection.

I'm going to include what I have now as part of an update
to the AuCataloging tools that I'll try to release by next week:
http://lib.auburn.edu/AuCataloging/
I'll go ahead and add an
"output data to csv" 
link, so we can pull the data into Excel too - like the click-counter.
The reference data is saved to a database - 
so it won't go away on server restart.
I'll eventually update the click counter to use the database too.

Cheers,
Reuben


--- Liza Weisbrod 4/20/2010 1:47 PM 
This is great. Not that I don't love my paper forms, but this is
better....

--- Marcia Boosinger 4/20/2010 1:19 PM 
Yes, I could get to it and WOW! This is fabulous! Liza and I are
actually going to be talking in her evaluation tomorrow about starting
this as a project so this is perfect timing. I had no idea you were
working on it in this form. Thanks so much and we'll be back in touch
very soon.
Marcia

--- Reuben Pasquini 4/20/2010 12:48 PM 
Hi Marcia!

Can you connect to this link ?

http://lib_au15798.ad.auburn.edu:8080/AuCataloging/en/refStat/statForm.jsf

If so, then what do you think ?

Cheers,
Reuben

Aaron, Marliese, and I exchanged the brief e-mail thread below discussing a recent study of the changing role of academic libraries. I want to put together some concrete ideas that express a coherent strategy for the library and address the issues raised in the thread. Once I have my thoughts in order, we can schedule some meetings to see what other people in the library think. Hopefully we can articulate an update to the library's strategic plan with action items that move the library in some new directions.



--- Aaron Trehub 4/19/2010 9:49 PM 
Interesting exchange.  Will add my $.02 when I get back.

Aaron 



--- Marliese Thomas 4/19/2010 10:58 AM 
Not so fast! I do agree with what you're saying. 
My thought was more, I guess, from the point of view of how it 
seems many librarians anticipate innovation. 
It seems so much of what we do and discuss is how to react to trends 
and devices than actually being involved in their creation or development. 
As you say, most people see us as a delivery system, not the innovators 
themselves. So, not so much with the beginning of Joe's comments, more 
how we prepare for something we don't know exists or isn't 
immediately relative (and incorporated) to current functions.  

:)
Marliese

-----Original Message-----
From: Reuben Pasquini
To: lib_systems 
To: Marliese Thomas 
To: Aaron Trehub 

Sent: 4/19/2010 10:50:03 AM
Subject: Re: Fwd: One Report, Two Findings: Library Roles Changing, Open Access Not Compelling

Marliese and Reuben disagree again! ;-)

I think the trends in the report are probably right -
faculty and students view the library as a purchaser
of online databases, and faculty are not interested in
open access unless it will help get tenure.

Joe is overly optimistic.
I don't think Joe's iPod argument applies.  We're more like
horse-buggy makers at the dawn of the automobile age.
I don't see the iBuggy on the horizon.

In fact we're already changing.
The MDRL and Study Commons have little to do
with supporting research.
A lot of the online work we do could just as easily
be done by OIT.

Suppose we group MDRL, 
e-journal and database management (ez-proxy, link resolver, ...),
web site and server admin (ETD, Voyager, Vufind),
and transfer all that responsibility and budget to 
an "online resources" subgroup under OIT.
OIT gets almost all the materials budget, the systems
department, and a group of 4 or 5 e-journal and database
specialists (Jack, Paula, ...).

What's left for the library ?
A staff budget over $4 million
to manage a book budget under $250K, 
reference, and ILL.

We need a new plan.

Cheers,
Reuben


--- Marliese Thomas 4/19/2010 7:49 AM 
I would agree with Joe's observation. It is difficult to see how far 
something can be taken if you don't even know what the new variables are. 
I think one of the challenging things about working with VuFind is grasping 
how much can realistically be done to it and how much 
are we asking Reuben and Clint to accomplish. Additionally, I 
believe many librarians err on the side of being cautious of how 
change will affect students (and jobs) because we can't anticipate 
how new things will be received and/or if they will become 
ubiquitous (iWhatevers). In the realm of teaching, there is an extra 
caution about teaching something that potentially will not be there 
or will be greatly changed, and that change will not be fully understood 
by the students. 

Interesting. Thanks, Aaron. 
Marliese



--- Aaron Trehub 4/18/2010 10:19 AM 
Folks,

Some more comments on the Ithaka report, which can be found at:

http://www.ithaka.org/ithaka-s-r/research/faculty-surveys-2000-2009/faculty-survey-2009

Aaron

--- Joseph Esposito  4/16/2010 3:44 PM 
One could argue that the survey got it right or wrong, but I hope 
no one puts too much time into surveys.  People don't know what 
they think.  They can't imagine innovations.  A survey of 
physicists 20 years ago would not have given us arXiv.  That took 
Paul Ginsparg.  A survey of anyone 10 years ago would not have 
offered us the iPod, iPhone, and iPad (though someone could have 
predicted the dumb names).

Those products, which are transformative in modern society, 
required Steve Jobs.  The future of libraries and the status 
among faculty is something that must be invented.  Find the 
inventor.

Joe Esposito

--- Philip Davis  4/15/2010 5:06 PM 
One Report, Two Findings: Library Roles Changing, Open Access Not Compelling
by Kent Anderson
Scholarly Kitchen
April 15, 2010
http://j.mp/92pRAi 

quote:

"It's been a fear among librarians for decades, a perception 
among publishers for years, and now a survey shows it's now a 
clear opinion among faculty and researchers -- libraries are 
increasingly viewed as information purchasing agents inside 
academic institutions rather than intellectual partners.

An unrelated perception that's been argued for years is that open 
access is of dubious value to scholars, with their dedication to 
its ideals hardly rising above lip service."

Thursday, April 15, 2010

2010/04/12-15 - death and taxes

It has been a weird week at the library. Henry died on Friday, Tom died on Sunday, and taxes are due today.

I installed the mobile-vufind code on our public server at http://catalog.lib.auburn.edu/mobile/.

I met briefly with Midge to discuss her work towards digital collections for honors theses and architecture senior projects. She's working away on a copyright and liability disclaimer for the undergrads to agree to. Hopefully we'll approach the Honors College and Architecture in a few weeks.

Finally, I made good progress on a web-tool for collecting reference statistics. I'll hopefully be able to release a beta next week or the week after.

Thursday, April 8, 2010

2010/04/08 - refStats with READ

The littleware.apps.tracker framework passed some initial regression tests this morning, so I decided to go ahead and begin some setup for the "reference statistics" web application. Marcia wants to use the READ system for collecting reference stats. I'll just deploy the ref-stats tool as another sub-application under the AuCataloging web app. I need to upgrade AuCataloging to organize each sub-application in its own folder (click-counter, cat-request, login-support, etd, ref-stats, ...), and to use Java EE 6 bean annotations and the littleware session runtime. Hopefully I'll have a simple demo ready to play with by the end of next week.
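
The EE 6 bean annotations are a big part of the appeal - a backing bean registers itself with no faces-config.xml plumbing. A hypothetical sketch of what a ref-stats bean might look like (bean and property names invented, not the actual code):

import javax.faces.bean.{ManagedBean, SessionScoped}
import scala.reflect.BeanProperty

// hypothetical backing bean - JSF 2.0 picks up the annotations automatically
@ManagedBean(name = "refStat")
@SessionScoped
class RefStatBean {
  @BeanProperty var question: String = ""    // bound from the form via #{refStat.question}
  @BeanProperty var patronType: String = ""

  // action method for the form's submit button
  def save(): String = {
    // hand the entry off to the task-tracker backend here
    "saved"
  }
}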

Wednesday, April 7, 2010

2010/04/06-07 - mobile vufind, tracker testing

Yesterday I integrated the vufind mobile theme with our devcat vufind instance at http://devcat.lib.auburn.edu/mobile/. It's always nice when someone else does most of the work - thanks vufind.org! I'll check with Clint about pushing the mobile site out to the production server tomorrow.

Today I got the tracker task-tracking system up to the beta-testing stage. The first test is failing with the Derby database I test against - I'll work on it some more tomorrow.
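
For reference, a regression can get a clean slate by opening a throwaway in-memory Derby database - a minimal sketch, not the actual littleware test harness:

import java.sql.{Connection, DriverManager}

object DerbyTestDb {
  // open a fresh in-memory Derby database, so no state leaks between test runs
  def connect(): Connection = {
    Class.forName("org.apache.derby.jdbc.EmbeddedDriver")   // load the embedded driver
    DriverManager.getConnection("jdbc:derby:memory:trackerTest;create=true")
  }
}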

Monday, April 5, 2010

2010/04/05 - repo is go!

Dean Bonnie sent an e-mail last Friday saying that she spoke with Midge about setting up collections on repo for Honors College theses and Architecture Senior Projects. The dean wants to ensure that each student accepts a publication agreement certifying that a submission is not plagiarized or libelous. That shouldn't be a problem. The good news is that we can go ahead with the project.

I spent a little time working on repo today - fixing some CSS rules that were loading images from a dev server, and setting up SSL protection for login credentials. I also worked on the littleware tracker system.

Thursday, April 1, 2010

2010/04/01 - Google Topeka

Happy April Fool's Day! Google has a "Topeka" banner as a joke.

I'm making good progress on the littleware.apps.tracker package, but still have at least a week of work to go before I'll be ready to start on the reference-question survey webapp.

No response from Dean Bonnie on the Honors Theses and Architecture Projects collections. I'll have to track her down next week.

Wednesday, March 31, 2010

2010/03/29-31 - advocate for library repository

I've made a little progress on my task list over the last few days. First, I set up our devcat vufind test server to access a Solr server on repo.lib.auburn.edu. I suspect that we'll see some improvement in VuFind performance with Solr running on a physical disk rather than on the VM virtual disk. The repo box also has a ton of memory and runs 64-bit Solaris, so we can configure Solr to use a 3 GB Java heap rather than the 2 GB maximum on our 32-bit Linux install.

I also dusted off the repository server we set up for Claudine's ag-paper project. I sent an e-mail to Bonnie asking her again to check whether the Honors College and School of Architecture are interested in setting up online collections for honors theses and architecture senior projects. Bonnie hasn't replied to me yet - ugh!

Finally, I began to add task-tracker infrastructure to the littleware API. I would like to use littleware to back the reference-question statistics system that I want to work on with Marcia. We will also be able to use the task-tracker to manage data for the click-counter, cat-request, and user-feedback systems.
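
One way to picture the shared engine: every counted event - a reference question, a click, a catalog request - becomes a task in a named queue. A hypothetical sketch of that shape (these names are invented for illustration and are not the real littleware API):

// hypothetical task-tracker surface - not the actual littleware API
trait TaskTracker {
  def createTask(queue: String, summary: String): String   // returns a task id
  def addComment(taskId: String, comment: String): Unit
  def closeTask(taskId: String): Unit
}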

Friday, March 26, 2010

2010/03/24-25 Reuben in swap

I'm sort of swapping between a few projects now. First, I need to spend some time on the Institutional Repository server to get it ready for the ag-paper collection that's now being scanned.

I want to set up a Nagios server to monitor the health of our various library services. I was reminded of the need for this kind of thing on Thursday, when a mod_jk log file that maxed out at 2 GB brought http://lib.auburn.edu down. Anyway - I brought up the OpenSolaris virtual machine I was playing with a while ago, and successfully booted a sparse zone with a MySQL server as a proof of concept. I'll set up a zone for a Nagios demo server. The systems department probably won't want to deploy OpenSolaris in production, since they're more comfortable on Linux, but it will be easy to migrate the Nagios config files to a Linux box once the prototype is set up the way we want it.

I'm waiting to hear back from Tuskegee's library on when they want me to visit to help set up a DSpace repository server. We finally decided to just install DSpace directly onto their Windows Server 2008 box. Microsoft's licensing would tie us into knots if we tried to build a Windows-based VM at Auburn and then deliver it to Tuskegee. Too bad - that would have been cool.

The Extensible Catalog guy wrote me back today. It looks like XC is focused on its middleware development plan. I don't think Auburn should invest in XC at this point - it's not clear to me where we would use it unless we enhance our Vufind backend, and the XC guy didn't seem interested in that idea at all.

Speaking of vufind - I'm copying the devcat Solr index to repo to run a test where devcat's Vufind install uses a repo-hosted Solr backend. The repo server has a ton of memory, and Solr will run directly on physical disk rather than a virtual-machine-emulated disk. I'm curious whether we'll see any performance boost.

Finally, I have a backlog of updates to the AuCataloging code. I want to update the UI for etd2marc and etd2proquest, bump the AuCataloging webapp up to Java EE 6 JSF with Facelets on Glassfish v3 (we currently run Glassfish v2), migrate the click-counter and cat-request web services to use a littleware node-database backend, and extend the Voyager-to-Vufind tool into a general Import-to-Vufind tool with support for OAI harvest and XSL transforms to Solr.

Tuesday, March 23, 2010

2010/03/22-23 lots of e-mail

I sent several long e-mails over the last couple days. I'm sure the recipients find them annoying, because looking at them now annoys me. The take-home messages are that Microsoft Hyper-V licensing sucks, LC does a poor job publishing authority records, and I'm dubious about the value of the Extensible Catalog.

I also published updates to the ebscoX and vygr2vfnd tools to Google code.


--- Reuben Pasquini 3/22/2010 1:09 PM ---
Hi Helen!

I downloaded the authorities data available from
     http://id.loc.gov 
, and it looks like the download only includes "subject"
authorities.
That's also what LC indicates on their web site:
    http://id.loc.gov/authorities/about.html
I extracted a summary of the data here:
    http://devcat.lib.auburn.edu/tests/summary.txt 

We could write a tool to harvest authority records from LC's
Voyager backed authority web site:
    http://authorities.loc.gov/ 
, but it's not an elegant solution.

I sent the following e-mail to LC.  I'll let you know if I get
a reply.

Cheers,
Reuben


---------------------

Thank you for posting the LCSH authority data at
     http://id.loc.gov/authorities/ 

I notice that the id.loc.gov data set only includes subject data.  
Does LC offer bulk access to its name, title, and keyword authority databases, 
or is that data only available one record at a time from http://authorities.loc.gov/ ?

Also, does LC publish the updates to its authority database via RSS ?

We would like to setup a mechanism where we can automatically 
synchronize our catalog's authority database with LC's database.  The combination of
bulk-download plus RSS updates would give us everything we need.

Thanks again,
Reuben




--- Reuben Pasquini 3/23/2010 10:24 AM ---
Hi Rod!

I just got some more information from Jon about hyper-v, and it looks like it's more trouble than it's worth.
Sorry to put you through all the work and e-mail.
I'm now inclined to just install d-space directly onto your Windows 2008 server.
I can drive out there some morning, and we can just install everything together.
That will probably take a couple hours to get a basic server up, and we can spend another
couple hours to setup a Tuskegee look and feel.  I can come out again a week after that
to try to answer any questions that come up after you play around with the server a bit.
What do you think ?

Cheers,
Reuben


--- Jon Bell 3/22/2010 4:01 PM ---
...
 
The licensing for Windows Server 2008 R2 Hyper-V, as far as I can tell, requires that you do not use the server for any other role (web server, file server, shares, etc...) while you are using the 4 free licenses.   Keeping a test VM here for a long length of time would impede us from using the server for a Ghosting server, for which it was originally purchased.  I was under the assumption you'd build it here and hand it off to them, then we could start to use the server for Ghosting.
 
It looks to me that with all these licensing troubles, the simpler solution would be Linux and VMware Server 2.0, both free.
 
Jon
 


--- Reuben Pasquini 3/22/2010 1:32 PM ---
Hi Rod,

Hope you're having a good Monday.
I just wanted to give you a quick update on 
the Tuskegee d-space project.
At this point we believe we'll require a license
to run Windows in a virtual machine.
Jon, one of our IT experts, is trying to verify that,
but that's our impression.

Assuming Microsoft requires a license to run Windows in a VM,
then we'll need a license from Tuskegee to setup the
VM here at Auburn.  If we setup the VM with an Auburn
license, then when we transfer the VM to your server
Tuskegee will be running an Auburn-licensed server,
which probably violates Auburn's contract with Microsoft.

This license thing looks like a mess.
If you have easy access to two Windows licenses
(doesn't matter which flavor of Windows) that
you can send us (one for the production VM, another
for the test VM), then
we can continue down the road of running Windows
in a VM.  

If you do not have easy access to Windows licenses for the virtual machines,
then we can change our plan to
either run d-space in a Linux VM (Linux is free - no licensing), 
or install d-space directly
onto your Windows Server 2008 box without a VM.
The linux option is my preference, but we can 
make either way work.

Anyway - let me know whether you can supply a couple
Windows license keys or install-media to us if we need it,
or which of the other options you prefer if Windows in a VM
won't work out for us.

Cheers,
Reuben


--- Reuben Pasquini 3/23/2010 11:59 AM ---
Hi Dave,

I'm Reuben Pasquini - one of Auburn's software people.
I'll try to lay out my impression of what XC is, and
what I think we at Auburn would hope to get out of it
if we invest in it.
I'll be grateful if you have time to read over this - let us know
what you think.
I don't speak for anyone else at the Auburn library.

I'm excited about the potential for us to join a team of
developers to work on software tools for use at multiple
libraries.  It seems a shame for small software teams at
different libraries to code different solutions to the same problems,
rather than somehow combine our efforts and attack
more and bigger challenges.
On the other hand, I'm not convinced that XC in itself offers
something useful to us at Auburn.

It's not my decision, but I suspect we will only join XC if
we see a way we can directly use the XC software in our environment.
We would also not be happy to merely accept coding assignments from
the core XC team; we would require input into the design, development process,
and priorities of the project.

I'm involved with several initiatives at Auburn.
We have deployed a Vufind discovery service
    http://catalog.lib.auburn.edu/code/ 
, and a d-space based ETD server
    http://etd.auburn.edu/ 
We hope to make progress on an institutional repository 
this year
    http://repo.lib.auburn.edu/ 
We also want to explore ERM solutions, 
open-source ILS, and we're beta testing a commercial
discovery service that integrates article-level search across many
of the e-journals and databases we subscribe to.

Here's my general impression of what XC is based on watching
the screencast videos (
     http://www.screencast.com/users/eXtensibleCatalog
) and a quick browse of some of the code on google.

  *. A SOLR-based metadata database

  *. Java tools for OAI and ILS metadata harvesting into SOLR 
          using XSL for format conversion

  *. PHP-in-drupal web tools for interacting with SOLR and
           the importers

XC is not an ILS, an ERM, an IR, or a federated
search engine.  XC could support article-level search only if we
can get the article contents into SOLR.
Drupal brings along a platform of services,
but Auburn has access to a commercial CMS via the university,
and we have a lot of in house experience with media-wiki and word press,
so we're not particularly excited about drupal.

Does that sound right ?
If so, then integrating XC with Vufind is
one possible area of collaboration.
XC seems very similar to Vufind.  Vufind stores 
metadata in a SOLR server, has a PHP and javascript web frontend
that accesses that data, and java-based tools for harvesting
metadata into SOLR.  We customized our Auburn Vufind instance
       http://catalog.lib.auburn.edu/ 
to remove its dependency on MARC.  
We have an XSL-based crosswalk in place to harvest
data from our Content-DM digital library:
        http://diglib.auburn.edu/ 
, ex:
        http://catalog.lib.auburn.edu/vufind/Search/Home?lookfor=Caroline+Dean&type=all&submit=Find
There's slightly dated information on some of our code here:
      http://catalog.lib.auburn.edu/code/ 
      http://lib.auburn.edu/auburndevcat/ 

The core http://vufind.org project has several weaknesses including variable code quality,
lack of regression tests, and the necessity for implementors to customize
the code base.  The vufind code Auburn runs is different from the code 
Michigan or Illinois run.  
We have posted our code online, but most other sites do not.
The choice of PHP as an implementation language and the lack
of discipline in locking APIs means that merging updates from the core project into
our code base is often a chore, so it's easier to just fork and grab random
patches that interest us.

I could imagine a 12 month XC-Vufind collaboration something like this:

   *. Phase 1 - integrate Auburn Vufind with XC
            x. Merge Solr schema
            x. Merge import tools.  Leverage XC import manager
                    if such a beast exists for ILS import and OAI
                    harvest into SOLR
            x. End of phase 1 - XC participating libraries all
                   run XC-Vufind discovery instances

     *. Phase 2 - cloud support
             x. Extend XC-Vufind to support multiple SOLR backends
             x. Host a shared SOLR server that holds the common open-access
                     records that all the XC-Vufind libraries search against
             x. Integrate easy cross-institution ILL in the discovery layer

Anyway - that's one way I could imagine using XC at Auburn.
I imagine that you and the XC team have a different vision for the project.
I'm interested to hear what you have in mind via e-mail before we
commit to attend your meeting.

Cheers,
Reuben

Thursday, March 18, 2010

2010/03/17-18 - Extensible Toolkit, MS Licensing

I have a major improvement to the ebscoX and vygr2vfnd tools nearly ready to release. I've set up a littleware.swingbase module that makes it easy to build simple Swing UIs with persistent properties, UI feedback, and a tools menu. I hope to release an update next week.
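
For a sense of the boilerplate swingbase folds away, here's a bare frame with a tools menu and a status line for UI feedback - a minimal hand-rolled sketch, not the swingbase API itself:

import java.awt.BorderLayout
import javax.swing._

object SwingBaseDemo {
  def main(args: Array[String]) {
    SwingUtilities.invokeLater(new Runnable {
      def run() {
        val frame = new JFrame("demo")
        val menubar = new JMenuBar
        menubar.add(new JMenu("Tools"))        // the tools menu
        frame.setJMenuBar(menubar)
        // status label at the bottom for UI feedback
        frame.getContentPane.add(new JLabel("ready"), BorderLayout.SOUTH)
        frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE)
        frame.setSize(400, 300)
        frame.setVisible(true)
      }
    })
  }
}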

Adam and Jon are looking into Windows VM licensing for the Tuskegee DSpace project. We think it makes sense to deploy the DSpace server to a virtual machine, but the Tuskegee library staff doesn't have Linux experience. Unfortunately it looks like Windows has a greedy VM-licensing model, so we might have to either deploy DSpace directly onto Tuskegee's host Windows server, or just go ahead with an Ubuntu VM and give the Tuskegee guys some basic training.

Finally, Aaron is considering whether the library ought to invest some time and effort into the Extensible Catalog project. I sent the following e-mail with my impressions.

Hi Aaron,

I watched the XC webcasts a while ago:
    http://www.screencast.com/users/eXtensibleCatalog 

I remember at the time I wasn't that impressed.
They have basically re-implemented
VuFind with a few more features like integrated OAI harvesting,
decoupling from MARC (what we did), etc.

    *. PHP front-end - VuFind is PHP-Smarty based,
                   XC integrates with Drupal CMS
                        http://drupal.org/

     *. XC integrates with the ILS via some NCIP thing -
           VuFind has an ILS-plugin system

     *. XC dumps all their data into the "Metadata Service Toolkit" -
           which is a SOLR server - same as VuFind.

XC is yet another project that dumps metadata into SOLR and
slaps a PHP web UI in front of it.

I am very interested in having a conversation about plans for
ILS and discovery software, and strategic plans for the library
in general.  My first impression is that XC is not a good place
to invest our time and effort.

Cheers,
Reuben

Tuesday, March 16, 2010

2010/03/15-16 Voyager upgrade

Monday and Tuesday were quiet days in cataloging this week, as Auburn's Voyager system was offline for an upgrade to version 7.2.

I finally posted a web-start version of the ebscoX tool on Google Code. I sent the e-mail below to Marliese and the libdev group with the details. I continue to work on the code underlying ebscoX and vygr2vfnd to make the tools easier to use and maintain. I'm also patching vygr2vfnd to directly use the ebscoX MARC-filter code. I hope to release an update to both tools next week.

I've also exchanged a few more e-mails with Rod and Dana at Tuskegee. The basic plan is to set up Tuskegee's Windows 2008 R2 Enterprise server to host a Hyper-V Windows virtual machine guest that runs the DSpace repository software. Rod is going to verify that their server has the hardware virtualization support that Hyper-V requires. Adam and Jon are already helping work things out.


--- Reuben Pasquini 03/15/10 11:29 AM ---
Hi Marliese,

If you still want to point EBSCO at our Voyager-to-Ebsco export tool,
then I finally posted the latest version of the EbscoX export 
tool to google code:
        http://code.google.com/p/littleware/

Just click on the "EbscoX" link (
      http://ivy2maven2.littleware.googlecode.com/hg/webstart/ebscoX.jnlp 
) to launch the application.

I'll try to improve the user interface over the next month or two.
The interface is pretty clunky, and the user needs to setup
a special properties file with the connection information for his/her
Voyager Oracle database.  I can help you do that for our database if
you want to give it a try.

There are brief instructions on the site if they want to download the code.
I'll write up something with more details one of these days.

Cheers,
Reuben

Thursday, March 11, 2010

2010/03/10-11 coding quietly

It has been a quiet couple days at the library. Yesterday I attended Beth's presentation on the Content-DM to VuFind export/import process. The process Beth and I worked out involves several steps, including running some command-line tools from a Linux SSH session. That was fine for Beth and Clint, but it's going to be annoying for Midge or Marliese to have to deal with it. We'll have to write some little GUI tool to manage the process after the dm-data.xml and dm-data-convert.xsl files are ready.
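
The core step for such a tool is just driving the XSL transform, and JAXP makes that a few lines. A sketch, assuming the file names above and an invented output file name:

import javax.xml.transform.TransformerFactory
import javax.xml.transform.stream.{StreamResult, StreamSource}

object DmConvert {
  def main(args: Array[String]) {
    // apply the crosswalk stylesheet to the Content-DM export
    val xform = TransformerFactory.newInstance.newTransformer(
      new StreamSource("dm-data-convert.xsl"))
    xform.transform(new StreamSource("dm-data.xml"),
                    new StreamResult("vufind-import.xml"))
  }
}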

Aaron and I have exchanged a few e-mails with Rod and Dana at Tuskegee. Tuskegee plans to experiment with a DSpace repository, and I'll probably lend a hand with the software install and setup.

Otherwise I've been plugging away at the AuCataloging refactor. I just checked in a patch, so all our regression tests now pass, and the AuCataloging webapp has an IVY build process. I still need to finish up the build for the webapp, but by the end of next week I hope to point EBSCO and the VuFind e-mail list at the AuCataloging code repository, web-start apps, and a download zipfile with ready-to-run binary apps.

Tuesday, March 9, 2010

2010/03/08-09 Tool Exchange

I'm still polishing up the AuCataloging code, but it's back in a running state with updates for Scala 2.8, Java EE 6, and a new IVY build process. I've set up separate build projects for voyager-to-vufind (v2v), ebsco-export (ebscoX), and the shared auLibrary, and published the code to a Google Code repository. I'll try to move the click-counter and ETD tools currently bundled with auLibrary out into their own build projects over the next week too.

I helped Clint set up a v2v build and test environment yesterday, so he can work out a bug where the vufind import does not properly label our e-journals with a "journal" format facet - the e-journals only show up under the "electronic" facet now. I'm pretty happy Clint is looking at the v2v code - I was a little worried that I was the only one who knew how to work with it.
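
The gist of the fix: a record that is both electronic and a serial should pick up the "journal" facet in addition to "electronic". A toy sketch of that logic - the real getFormat() presumably keys off MARC fields, and these boolean flags are placeholders:

// toy sketch - flag names invented; not the actual AuburnIndexer code
def formats(isElectronic: Boolean, isSerial: Boolean): List[String] = {
  var out: List[String] = Nil
  if (isElectronic) out = "electronic" :: out
  if (isSerial) out = "journal" :: out   // don't let "electronic" mask the journal facet
  out
}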

I still need to update the AuCataloging webapp to use an IVY build process, and I want to set up build rules that bundle web-start and downloadable versions of the v2v and ebscoX tools, so we can easily publish those apps online for other libraries and EBSCO to use. I'll work on that tomorrow.

I would also like to port the AuCataloging webapp JSF code to use Java EE 6 Facelets and annotations, then retire the current EE 5 JSP code. We'll have to update our app server to the new Glassfish v3 server to use the EE 6 APIs. I moved a non-library JSF project to EE 6 on Glassfish v3 last week, and I like it a lot.

Friday, March 5, 2010

2010/03/04-05 Still Refactoring

We had a fun libdev meeting this morning. Clint gave us a review of the code4lib 2010 conference he attended last week.

I checked in more AuCataloging patches today that break out the Voyager-2-VuFind (V2V) tool to its own build project. The new V2V project uses an IVY dependency on the AuLibrary 1.0 artifact that I added to littleware's online IVY/Maven repository today.

I also wrote an AuburnIndexerTester regression test for the V2V test suite. On Monday I'll sit down with Clint and set up a V2V build environment, so he can work on the AuburnIndexer's getFormat() method.

Next week I'll break out the EbscoX tool to its own build project, so I can point the EBSCO developers at that code. I'll also republish web-start versions of the Voyager-2-Vufind and EbscoX tools to the project site on Google code. Once the web-start apps are online, anyone will be able to easily run those tools.