Student projects in SeaDragon
I think you’ll agree that this is a vast improvement over our previous method of distributing boards.
The workflow we’re using is as follows:
- Open the presentation board (usually submitted as a PDF document) with Adobe Photoshop, rasterizing the file at about 5,000 pixels wide (around 100 MB).
- Save the resulting image as a TIF image, because Deep Zoom Composer can’t open vector files. If there are multiple boards, save each as a separate file.
- Create a new project within Microsoft’s Deep Zoom Composer, and import the TIF image(s).
- Place the TIF image(s) via the Compose panel.
- Export the project in SeaDragon Ajax format, and copy the GeneratedImages folder to our webserver.
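The GeneratedImages folder produced by the export step is a Deep Zoom tile pyramid: the image is halved repeatedly down to a single pixel, and each level is cut into fixed-size tiles. As a rough illustration of what gets generated, here is a small sketch that computes the level dimensions and tile counts for a hypothetical 5,000 × 3,500 pixel board, assuming Deep Zoom Composer’s default tile size of 254 pixels (the exact sizes and overlap settings depend on the export options, so treat the numbers as indicative):

```python
import math

def deep_zoom_levels(width, height, tile_size=254):
    """List (level, width, height, tile_count) for a Deep Zoom pyramid.

    Level 0 is a single pixel; the top level is the full-resolution image.
    Each level doubles the dimensions of the one below it.
    """
    max_level = math.ceil(math.log2(max(width, height)))
    levels = []
    for level in range(max_level + 1):
        scale = 2 ** (max_level - level)          # downscale factor for this level
        w = math.ceil(width / scale)
        h = math.ceil(height / scale)
        cols = math.ceil(w / tile_size)           # tiles across
        rows = math.ceil(h / tile_size)           # tiles down
        levels.append((level, w, h, cols * rows))
    return levels

# Hypothetical board rasterized at 5,000 px wide with a 3,500 px height:
for level, w, h, tiles in deep_zoom_levels(5000, 3500):
    print(f"level {level:2d}: {w:5d} x {h:5d}  ({tiles} tiles)")
```

For a board this size the pyramid has 14 levels, which is why the viewer can serve everything from a thumbnail-sized overview to full resolution without ever sending the whole 100 MB image.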
A few notes about this process:
Ease of use
While putting together any one presentation layout is extremely easy, the process will need to be faster as we scale up our use of this technology. The digital library we maintain at the school has more than 30,000 images, so doing these by hand will get old quickly. One of my next research questions will be whether we can automate any parts of the process.
My ideal solution here would be to provide a PDF document to a script, and have it follow the workflow above. Other deliverables from the script would be a thumbnail image, the requisite database entries in our CMS, and a link to the original PDF for people to download should they wish to.
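As a first pass at that script, the rasterize-and-tile steps could be handed to command-line tools instead of Photoshop and Deep Zoom Composer. The sketch below builds the commands for two tools that are my assumptions, not part of the workflow above: pdftoppm (from Poppler) to rasterize each page of the PDF, and libvips’ dzsave to write a Deep Zoom pyramid that the SeaDragon Ajax viewer can read. The thumbnail, CMS entries, and download link would still need separate handling:

```python
def rasterize_cmd(pdf_path, out_prefix, width=5000):
    """Command to rasterize each page of a PDF to a PNG at a fixed width.

    -scale-to-x sets the pixel width; -scale-to-y -1 preserves the aspect
    ratio. Multi-board PDFs come out as one numbered PNG per page.
    """
    return ["pdftoppm", "-png",
            "-scale-to-x", str(width), "-scale-to-y", "-1",
            str(pdf_path), str(out_prefix)]

def tile_cmd(image_path, out_name):
    """Command to tile a raster image into a Deep Zoom ("dz" layout) pyramid."""
    return ["vips", "dzsave", str(image_path), str(out_name), "--layout", "dz"]

# Each list can be run with subprocess.run(cmd, check=True), e.g.:
#   subprocess.run(rasterize_cmd("board.pdf", "board"), check=True)
#   subprocess.run(tile_cmd("board-1.png", "board-1"), check=True)
```

This is only a sketch of the shape such a script might take; whether the libvips output matches what Deep Zoom Composer exports closely enough for our viewer is exactly the kind of thing the research question above would need to settle.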
SeaDragon vs. Silverlight
When I first saw this technology, I was impressed that it allowed users to intuitively explore large graphical datasets. Unfortunately, I can’t find good penetration data for Silverlight. The most recent references put it at around 20–25%, but those go back to February of 2009. Hopefully, with Microsoft using the technology to broadcast Wimbledon and the 2010 Olympics, that figure will rise; until it approaches ubiquity, though, I’d rather not fork the user experience for something this central to our traffic.
What I’m less clear about, at this point, is how technologies like this address accessibility concerns. Surely a screen reader would not be expected to describe the various regions of a layout as they come into view, and yet, in some ways, that might be exactly what is called for.
As always, any thoughts or feedback that you have would be most welcome. The student galleries are overdue for a redesign (more about that in a separate post), so this latest development is more about testing SeaDragon than about great improvements to the galleries themselves.