

  • Working with the Samvera community, develop a process for ensuring that Hyrax releases are stable and tested.


  • Understand all of the features in Hyrax releases and document them. 

Google Hangout:


Chris Diaz

Steve Van Tuyl

Sherry Lake

Julie Rudder

Harsh Parekh


Michael J. Giarlo - dev team

Jenn Colt - ux team

Michael Tribone and Kate Deibel said they were willing to consult on the process for accessibility testing. 



Why not automate? Why human testing?

What about UX and accessibility?

Meeting notes:

We will produce two documents:

UI Interactions = Call it "Hyrax Feature Guide"

UI Testing Tracking = Call it "Release testing template" and make a new version of it with every release. 

Complete Hyrax Feature Guide document - for Hyrax 2.0 

  • How do we do this before sandbox? 
  • Decide on the audience: we want this to be a manual that explains all the features, including administration. 
  • 1.3 walkthrough - Chris (mark features that are not in 1.3, add missing interactions) by Sept 1st. 
  • Regarding format: the categories seem good, but repeating the word "interaction" is distracting. Needs more examples that address "use this for" scenarios. Ask Steve what he has in mind.
  • What is in Hyrax 2, and when can we get a list? 

Release Testing Process Notes: 

  • Need a bug reporting template. 
  • What is an acceptable time frame for returning test results? (2-3 weeks) 
  • The turnaround time for updating the two important documents is a concern. 
  • We think we need three institutions to commit to this work for each release ahead of time. It would be good to rotate these institutions. 
  • We would love for two institutions to commit to upgrading in-production systems before wide release, but realize this is hard. 
  • Minor release should have less process? 
    • update the 2 documents
    • new features should be tested
    • how thoroughly do all features need to be tested? 
  • Accessibility testing should be included as a tab, but needs some expertise. What is the minimum that we support? Should automated testing be done as a minimum? Consult with Michael and Kate. 

Before the Release Testing Template is complete, we need to complete the UI Interactions document and answer these questions: 

  • Which browsers and platforms should be included, and which are supported? 
    • After asking the UX group and the repo managers, here is a suggestion for how we decide which browsers to include: 
      • Include browsers that the majority of users use, based on data from
      • Include support for high-use browsers in different areas of the world.
      • Are there browser versions to consider for accessibility?
      • Admin and staff tools may warrant a reduced level of support on certain platforms. 
      • Browserstack is a good tool for testing. 
      • The list of supported browsers needs to be reconsidered for each release. 
    • List to test for Hyrax 2.0 release will include the following: (browser version popularity)
      • Chrome on Windows 
      • Chrome on Mac OS X 
      • Chrome on Android (for public interactions) (most popular android versions)
      • Safari on iPhone (for public interactions)
      • Firefox and Safari should be added (for public interactions?)
      • Use (any browser?)
      • Use Firefox for NVDA testing with a screen reader, on either Windows or Mac.
    • The testing document will specify which OS and browser versions to test. Institutions are welcome to test additional browsers to identify gaps and report issues. 
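As a rough illustration (not a settled format), the browser/OS list above could be turned into empty tracking-sheet rows with a small script. The field names and the "Scope" marker here are placeholders; the status vocabulary is assumed to be defined elsewhere (e.g., in a README tab):

```python
import csv
import io

# Browser/OS combinations proposed for the Hyrax 2.0 release testing list.
# "public" marks browsers tested for public-facing interactions only.
BROWSER_MATRIX = [
    ("Chrome", "Windows", "all"),
    ("Chrome", "Mac OS X", "all"),
    ("Chrome", "Android", "public"),
    ("Safari", "iOS (iPhone)", "public"),
    ("Firefox", "Windows", "public"),
    ("Safari", "Mac OS X", "public"),
]

FIELDS = ["Hyrax version", "Browser", "Platform", "Scope", "Status"]

def matrix_rows(hyrax_version):
    """Yield one empty tracking row per browser/OS combination."""
    for browser, platform, scope in BROWSER_MATRIX:
        yield {
            "Hyrax version": hyrax_version,
            "Browser": browser,
            "Platform": platform,
            "Scope": scope,
            "Status": "",  # e.g. Pass / Fail / Blocked, defined in the README tab
        }

def write_template(hyrax_version):
    """Render the tracking matrix as CSV text for pasting into a spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(matrix_rows(hyrax_version))
    return buf.getvalue()

print(write_template("2.0"))
```

Regenerating the sheet per release keeps the "new version with every release" requirement cheap to satisfy.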

To-dos (by Aug 18th):

Ask repo-managers what platforms and OS should be tested/supported

Ask Michael/Kate about accessibility testing

Develop a list of features to be included in 2.0 

Chris does 1.3 walkthrough 

Finish a template for the testing document.

Draft template (see comment below on contents)

  • Add instructions for how to use the document -Sherry 
  • Mirror the structure and organization of the UI Interactions document.

Bug reporting template: would the new issue helper text work, or do we need something else from students? 



  1. The UI Interactions document looks great. I think each section should be what we test. We shouldn't have to duplicate all the interactions on the spreadsheet - or do folks think each interaction needs to be taken apart? I can make a mock of the template if you want by cutting and pasting.

    Is the UI Interaction document complete? No use adding text from the document to the test template until it is "complete".

  2. Sherry Lake made comments on the bug reporting image, but you can only see them (tagged) if you click the image.

  3. I prepared a similar spreadsheet based on our process at ND and accommodating the features from Julie's example.


    • This spreadsheet focuses on test outcomes for a specific Hyrax release. (It's important for a tester to know which Hyrax version is being tested, and that should be captured in the spreadsheet.)
    • To reduce the noise of browser versions, I've indicated only major point versions.
    • A README tab serves as a place to capture metadata, additional instructions, and definitions of the statuses used in the spreadsheet.
    • This spreadsheet refers to use cases by 'use case numbers', which I'm recommending for the Hyrax UI interactions. The format of the use case number is open for discussion.
    • Apart from 'use case titles', I am assuming the 'Hyrax UI interactions' document is the single source of truth. This will help keep communication consistent about what/how stories were tested when bugs are being discussed or investigated. 
    • I've kept each category as a separate spreadsheet tab for the sake of simplicity.
    • Acceptance criteria are analogous to the 'OK looks like' field of the spreadsheet Sherry Lake has started.

    Open questions:

    • I do not see value in capturing the date of testing if we focus on testing by specific Hyrax releases. What do you think?
    • Is there value in capturing the name of the tester? I see that being relevant only when a bug is being reported.
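    To make the row structure concrete, here is a minimal sketch of one spreadsheet row as discussed above. All field names and the example use-case number are placeholders, assuming the 'Hyrax UI interactions' document stays the source of truth for titles:

```python
from dataclasses import dataclass, asdict

@dataclass
class TestResult:
    """One row of the release-testing spreadsheet (field names are placeholders)."""
    hyrax_version: str        # which release was tested; captured per row
    use_case_number: str      # e.g. "3.2", referring to the UI Interactions doc
    category: str             # spreadsheet tab, e.g. "Dashboard"
    status: str               # one of the statuses defined in the README tab
    tester: str = ""          # kept so distributed testers can be followed up with
    bug_reported: bool = False  # "bug reported in GitHub, y/n"

# Hypothetical example row for a Hyrax 2.0 test pass.
row = TestResult("2.0", "3.2", "Dashboard", "Pass", tester="SL")
print(asdict(row))
```

    Whether `tester` and `bug_reported` stay in the sheet is exactly the open question in this thread; they are included here only to show where they would go.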
  4. Harsh Parekh Sherry Lake I think these are looking good and we should finalize a template; maybe discuss pros/cons of the approaches? We should set a meeting time. I'll send a Doodle poll - check Slack for a poll! 

    Also re the questions: I think we need to track the name (or something) so we can follow up with distributed testers as needed. Another thing I want to track is whether a bug was reported in GitHub (y/n). These are both things to make tracking the distributed testing easier, and are less about the test itself. 

    Lastly, see this document, which records some of the decisions we've made so far. Just an FYI that it's there.

  5. Update based on my action items from our hangout yesterday:

    1. Hyrax UI Interaction Descriptions
      • Added a numbering scheme to each section and sub-section
      • Created an index for easy navigation
      • Copied 'Concepts and definitions' from 'Hyrax Repo Management Guide'
    2. Hyrax Repo Management Guide
      • Formatted headings to create an index for easy navigation
      • Categorized information that needs to align with the 'Hyrax UI Interaction Descriptions' document
      • Made a first attempt to move subsections of the document into these categories

        1. Category: Search and discovery

        2. Category: Main Navigation

        3. Category: Dashboard

        4. Category: Works

        5. Category: Administration

    3. TODO: Focus on phasing out the 'Hyrax Repo Management Guide' and making the 'Hyrax UI Interaction Descriptions' document the final source of truth

      1. Resolve comments in 'Hyrax Repo Management Guide' and update the definitions in 'Hyrax UI Interaction Descriptions'
      2. Review the UI interactions section in 'Hyrax Repo Management Guide' for accuracy and add/update each of the subsections into the right subsection of 'Hyrax UI Interaction Descriptions'