Release 1.6.7

This very minor NarraFirma release adds a bit more functionality to the story-form building and translation facility. Specifically, you can now set (and translate) the “Does not apply” slider label in general, not only for specific questions.

As always, if you find any bugs – or find anything in NF confusing or hard to use – please tell me on the GitHub issues page.

Release 1.6.6

This smallish NarraFirma release adds some quality-of-life improvements and fixes some small bugs.

Display lumping + renaming + hiding

In the Catalysis part of NarraFirma, display lumping (on-the-fly data manipulation) is proving to be quite useful. However, I recently realized that it can be used for more than just lumping together similar answers. So I extended it slightly.

  1. You can now rename an answer to a choice question. This might be useful if you realize that some of the answers you used in your survey (or interview script) are too long for (or too confusing on) your graphs. For example, if an answer was “I’m not sure about that” you might want your graphs to say “Unsure.”
  2. You can now hide an answer that was rarely chosen. For example, if you asked people something like, “How often do you [do some thing]?” and 5 people out of 500 chose “never,” you can draw your graphs without that answer.

I did think about the fact that people could use these new functions to distort what people say. But there are so many ways to distort what people say that I don’t think withholding this functionality would deter anyone who wanted to do that. After all, you could just export the data to a spreadsheet, change it there, and import it again. Also, this functionality could be quite helpful in situations where the collected data is too messy to be useful without some cleanup. The general rule is: in participatory work, if you use any form of data manipulation, in any project, using any software, you should always be transparent about what you did and why.
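In case it helps to picture what these settings do behind the scenes, here is a minimal sketch, in TypeScript with invented names (not NarraFirma’s actual code), of a rename-and-hide mapping being applied to answer counts before a graph is drawn.

    // Hypothetical display-adjustment settings: rename some answers, hide others.
    interface DisplayAdjustments {
        renames: { [answer: string]: string };  // original answer -> label to show on graphs
        hidden: string[];                       // answers to leave out of graphs entirely
    }

    // Apply the adjustments to a map of answer -> story count before graphing.
    // Renaming two answers to the same label merges their counts, which is
    // exactly what ordinary display lumping does.
    function adjustAnswerCounts(
        counts: { [answer: string]: number },
        adjustments: DisplayAdjustments
    ): { [answer: string]: number } {
        const result: { [answer: string]: number } = {};
        for (const answer of Object.keys(counts)) {
            if (adjustments.hidden.indexOf(answer) >= 0) continue;  // hide rarely chosen answers
            const label = adjustments.renames[answer] || answer;    // rename long or confusing answers
            result[label] = (result[label] || 0) + counts[answer];
        }
        return result;
    }

    // Example: shorten one answer and drop another that only 5 of 500 people chose.
    const graphCounts = adjustAnswerCounts(
        { "I'm not sure about that": 42, "never": 5, "often": 210 },
        { renames: { "I'm not sure about that": "Unsure" }, hidden: ["never"] }
    );
    // graphCounts is { "Unsure": 42, "often": 210 }

The important point is that all of this happens at display time; the answers stored in your story collection are never changed.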

Less verbose story cards

My original idea for story cards was that choice questions would provide context by displaying all possible answers for each story, with the chosen answers marked in bold. However, for some choice questions (those with many and/or long answers) a show-it-all display confronts participants with too much text to wade through.

So I have implemented an option where you can show or hide non-selected answers in your story cards.

  • Showing non-selected answers – Feel about: happy sad relieved frustrated hopeful hopeless (with the chosen answers shown in bold)
  • Hiding non-selected answers – Feel about: happy hopeful

I also made the between-answer display character(s) customizable, so your answer lists can say “happy/hopeful” or “happy, hopeful” or “happy and hopeful,” or anything else you like.
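To make the two display modes concrete, here is a minimal sketch, in TypeScript with made-up names (not how NarraFirma actually builds its story cards), of putting together one story-card line for a choice question, including the customizable separator.

    // Hypothetical sketch of building one story-card line for a choice question.
    function formatChoiceLine(
        questionLabel: string,
        allAnswers: string[],
        chosenAnswers: string[],
        showNonSelected: boolean,
        separator: string           // e.g. " ", "/", ", ", " and "
    ): string {
        // In the real display the chosen answers would also be marked in bold.
        const answersToShow = showNonSelected
            ? allAnswers
            : allAnswers.filter(answer => chosenAnswers.indexOf(answer) >= 0);
        return questionLabel + ": " + answersToShow.join(separator);
    }

    const all = ["happy", "sad", "relieved", "frustrated", "hopeful", "hopeless"];
    const chosen = ["happy", "hopeful"];

    formatChoiceLine("Feel about", all, chosen, true, " ");
    // -> "Feel about: happy sad relieved frustrated hopeful hopeless"

    formatChoiceLine("Feel about", all, chosen, false, " and ");
    // -> "Feel about: happy and hopeful"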

Custom annotation order

When you are annotating stories, you might not think of annotation questions in the same order as you would like to answer them. So I added an option to the “Write annotation questions” page to specify the order of annotation questions on the “Annotate stories” page. While I was doing that, I added the option to create headers above groups of questions (to make the annotation process easier).
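Purely as an illustration (the format and names here are invented; they are not how NarraFirma stores these settings), an ordering-with-headers specification could be as simple as a list that mixes header entries with question names.

    // Hypothetical ordering for the "Annotate stories" page:
    // each header groups the annotation questions listed after it.
    const annotationOrder: Array<{ header: string } | { question: string }> = [
        { header: "About the storyteller" },
        { question: "Storyteller role" },
        { question: "Level of experience" },
        { header: "About the story" },
        { question: "Theme" },
        { question: "Needs follow-up?" },
    ];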

New correlations report

As I use NarraFirma on projects, I keep wanting new reports (“if only I could see this”), so I “scratch the itch” and build them, then use them in my local copy of NF. Later, if I think other people might want to see the same reports, I clean them up and add them to NF. (I’ve been doing that for a long time.)

I recently worked on a project using NF, and I wanted (so I created) a summary correlations report. I think you might want to see that report too, so I kept it. NF can now spit out (to CSV) a table that summarizes the significant positive and negative correlations found in each subset of stories defined by each answer to each choice question. This is a good report to glance over when you want a quick idea of where your correlations lie – before you go through your (possibly thousands of) subset correlation graphs.
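If you are curious what that summary involves, here is a rough sketch, in TypeScript with invented names (NarraFirma’s real report uses its own statistics code, and a proper significance test rather than the simple cutoff shown here), of the kind of loop behind it: for each answer to each choice question, take the subset of stories that gave that answer, correlate each pair of scale questions within the subset, and keep a CSV row for every correlation that stands out.

    // Hypothetical sketch of a subset-correlations summary (not NarraFirma's actual code).
    interface Story {
        choices: { [question: string]: string };  // choice question -> chosen answer
        scales: { [question: string]: number };   // scale question -> numeric value
    }

    // Plain Pearson correlation between two equal-length number arrays.
    function pearson(xs: number[], ys: number[]): number {
        const n = xs.length;
        const meanX = xs.reduce((a, b) => a + b, 0) / n;
        const meanY = ys.reduce((a, b) => a + b, 0) / n;
        let sxy = 0, sxx = 0, syy = 0;
        for (let i = 0; i < n; i++) {
            const dx = xs[i] - meanX;
            const dy = ys[i] - meanY;
            sxy += dx * dy; sxx += dx * dx; syy += dy * dy;
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    // One CSV row per (choice question, answer, scale pair) whose correlation
    // clears a cutoff. A real report would use an actual significance test
    // instead of a fixed threshold on |r|.
    function summarizeSubsetCorrelations(
        stories: Story[],
        choiceQuestions: string[],
        scaleQuestions: string[],
        cutoff: number = 0.3
    ): string {
        const rows = ["choice question,answer,scale 1,scale 2,stories,r"];
        for (const choice of choiceQuestions) {
            const answers = Array.from(new Set(
                stories.map(s => s.choices[choice]).filter(a => a !== undefined)));
            for (const answer of answers) {
                const subset = stories.filter(s => s.choices[choice] === answer);
                for (let i = 0; i < scaleQuestions.length; i++) {
                    for (let j = i + 1; j < scaleQuestions.length; j++) {
                        const pairs = subset
                            .map(s => [s.scales[scaleQuestions[i]], s.scales[scaleQuestions[j]]])
                            .filter(pair => pair[0] !== undefined && pair[1] !== undefined);
                        if (pairs.length < 3) continue;  // too few stories to say anything
                        const r = pearson(pairs.map(p => p[0]), pairs.map(p => p[1]));
                        if (Math.abs(r) >= cutoff) {
                            rows.push([choice, answer, scaleQuestions[i], scaleQuestions[j],
                                pairs.length, r.toFixed(2)].join(","));
                        }
                    }
                }
            }
        }
        return rows.join("\n");
    }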

If you are doing a project with NF and there is a report you would love to see, tell me about it on the GitHub issues page. Depending on what sort of report it is and how it fits into NF’s existing architecture, I might be able to build it for you (maybe quickly) and add it to NF. Of course, whether I will have the time to do that will depend on many other things (which are mostly out of my control). But please do reach out if this happens to you, because I would like to hear what would make NF work better for you.

Spot check → Review

I decided to rename the “Spot-check graphs” page to “Review graphs.” I did this because recently (well, recently in the glacial way my mind works) it came to my attention that some NF users have been using that page not to spot-check graphs for completeness during story collection (as I thought they would) but to look for patterns in their data without going through the catalysis process.

I had not thought of that use for that page. But I can see that the full catalysis process might not seem worth doing on very small projects. In the last NF update (which happened after I heard about this) I improved the spot-check/review page. This time I thought I’d rename the page to address the confusion people must have over what it can be used for.

By the way, I did think a little about building a sort of dashboard page that (like the survey) could be accessed by project participants who do not have access to the entire project. That’s a harder task than it may seem. Just in case anyone wants to know why, I’ll explain.

Every PNI project needs to have a control point somewhere in the collection process. This is because people often volunteer information you asked them not to provide, such as their phone numbers or the phone numbers of other people. (You would be amazed how often people do this.) As a result, all PNI projects must go through a phase in which personally identifying information is “scrubbed” out of the stories and other data. This has to be done before the stories (and other data) can be shown to any participant groups. This is why the code that runs the NF survey is completely separate from the code that runs the NF project management interface. They are basically two different pieces of software.

So an NF dashboard that is visible to project participants would need some way to show people only the stories and data that had been reviewed and marked as safe to share. The data architecture of NF doesn’t currently have any way to mark stories (or story collections) as safe to share. We would have to add a way to enter and store those markings. That’s not hard to do, but we would also have to find a way to deal with legacy data that has no such markings. Dealing with legacy data has been a big part of my work on NF for almost a decade. I’m proud of the fact that the very first NF projects can still be read and used without any translation issues, and I want to keep that record clean. I think the best way to do that would be to assume that data without markings cannot be shared, while giving people some deliberate process for marking legacy data as shareable.
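To give a sense of what “assume that data without markings cannot be shared” would mean in practice, here is a minimal sketch (the field and function names are hypothetical; nothing like this exists in NF today) of a default-deny check that treats legacy stories, which have no marking at all, exactly like stories marked as unsafe to share.

    // Hypothetical marking on a stored story; legacy stories simply lack the field.
    interface StoredStory {
        text: string;
        reviewedAndSafeToShare?: boolean;  // absent on everything saved before the feature existed
    }

    // Default-deny: only an explicit true counts as shareable, so unmarked legacy
    // stories and stories marked false are both withheld from a participant-facing dashboard.
    function canShowToParticipants(story: StoredStory): boolean {
        return story.reviewedAndSafeToShare === true;
    }

    // A later "mark legacy data as shareable" step could simply set the field,
    // leaving everything else about the stored data untouched.
    function markAsShareable(story: StoredStory): StoredStory {
        return { ...story, reviewedAndSafeToShare: true };
    }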

Also, building a participant dashboard would require us to write a whole new set of separate access and display scripts, both client and server, that are similar to the survey scripts but that require a new intermediate level of permission (and a permission-granting interface that also deals with missing permissions in legacy projects). We could do all of this if we had the time and funding to do it, but right now we don’t. It’s a good someday idea, though. Who knows, maybe someday we will get a nice big grant so we can make NF work even better for everyone. :-)

Bug fixes

I also made lots of little tweaks to the software as I used it and talked to users about it. For example:

  • I cleaned up the table of links on the application’s home page to make it easier to see and access your story collections. I also cleaned up some of the page descriptions on section pages.
  • On the “Explore patterns” page, I moved the export buttons (there are now three) from the bottom of the page (where they interfered with writing observations) to the top. I also made them smaller and moved them to the right, where they will hopefully not be as distracting as they were.
  • I fixed a few small display bugs, where interface elements appeared where they were not supposed to be.
  • I fixed an earlier oversight: the write-in answer label was missing from the translation dictionary (thank you to the helpful user who noticed that mistake and brought it to my attention).

As always, if you find any bugs – or find anything in NF confusing or hard to use – please tell me on the GitHub issues page.