How do you handle narrative data such as the words that interviewees use?
1) https://narrafirma.com/ (Cynthia)
2) sentiment analysis, word clouds
3) what is a difference between word clouds?
4) Analyse the words each person uses
a. form a concordance; this defines a short or long vector of important words
b. count uses of each word in the transcript from each person; each person becomes a vector in a multidimensional word space
c. apply cluster analysis, discriminant analysis, etc., to these data points to find clusters, trends, differences, divisions, heat maps, etc. (see the sketch after this list)
(see, for example, Andrews’s plots; https://www.statmethods.net/advstats/cluster.html)
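A minimal sketch of steps 4a-4c in Python, assuming scikit-learn is available; the example transcripts, the 50-word concordance size, and the choice of two clusters are placeholders, not anything specified in the notes above:

```python
# Sketch of steps 4a-4c: build a concordance, turn each person into a
# word-count vector, then cluster the people in word space.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Placeholder transcripts; in practice these would be full interview texts.
transcripts = {
    "person_a": "I worry about the farm and the weather every single day",
    "person_b": "the weather was fine and the harvest made everyone happy",
    "person_c": "I worry the harvest will fail if the weather turns",
}

# 4a-4b: the concordance here is just the most frequent content words;
# a real analysis would curate the word list by hand.
vectorizer = CountVectorizer(stop_words="english", max_features=50)
X = vectorizer.fit_transform(list(transcripts.values()))  # rows = people, cols = words

# 4c: cluster the people to look for groupings in word space.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for person, label in zip(transcripts, kmeans.labels_):
    print(person, "-> cluster", label)
```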
Cynthia Kurtz <cfkurtz@cfkurtz.com>
Lev
Basement pictures
Urban farm
Sequencing machine
I guess I need to be careful to define bad data...you're the second person to suggest it's data you don't like
Bad data is data that doesn't meet the assumptions you want to make for your analysis
'about' or 'at least' .... did I send you the hedges paper??
Maybe you'd be interested to know Peggy
Scott Ferson wrote to Cynthia Kurtz:
Surfing online, I found your consulting practice and software, which is very interesting to me. I'm wondering if I should investigate it more deeply as a possible way to extract information that is buried (encoded) in narratives. Some colleagues from archeology and linguistics met this afternoon with us engineers and ecologists to discuss methods for handling "bad data", including information embedded in ethnographic material (stories). Would NarraFirma be useful for that, do you think? I know nothing about ethnology, and embarrassingly nothing at all about what you've been up to all these years, but it seems to me that you're working in an area that I need to learn more about, or at least get a student to learn more about if I am too old a dog for a new trick.
I should say that I am completely convinced of the essential importance of storytelling, and narrative formation, in human interactions. (Almost as much as I am convinced about the role of /play/ in learning.) I really think that scientists and engineers need to up their game in this area.
Cynthia Kurtz responded to Scott:
Bad data! That's funny. I guess whether data is good or bad depends on what you want to use it for. Stories are excellent data for what I help people do with them. However, I have been going to great pains for nearly two decades to convince people that participatory narrative inquiry is not a science; it's a conversation. By framing the practice in this way, the focus moves from proof (which is next to impossible with stories) to utility (which is perfectly achievable). I sometimes get pushback on this stance from colleagues and clients, but having been (or at least having trained to be) a "real scientist" I think it's a mistake to claim things one can't support.
The vital step in pursuing utility without proof is to rely on group sensemaking - meaning, we get a bunch of people who have something to gain from making sense of collected stories to work with them in structured exercises. The result is a series of insights, recommendations, and ideas that are almost always worth having. The approach has proven to be useful in hundreds of projects across a variety of goals and contexts. (The bulk of the method was developed through six years of research work funded by DARPA and the government of Singapore.)
You asked, "I wonder how you confront uncertainty in stories. How do you capture it and represent it when you do?" Uncertainty in stories is one of the things we seek out, represent, and work with. We do this in three ways.
First, we deliberately elicit uncertainty in stories by using wide-ranging questions that encourage people to choose varied stories from their experiences. The result is that the mix of collected stories provides what I call "narrative richness" around a central topic. For example, we might ask people a question like, "Can you think of a day when you felt exceptionally anxious about [something] - or the reverse, exceptionally composed? What happened that made you feel that way?" The stories we get out of a query like that give us a gamut of conditions to explore.
Second, after each person tells their story, we ask them some questions about it. We often include questions about uncertainty - e.g., "Did the person in this story always know what was coming next, or were they often surprised?" Then we use the answers to those questions to map uncertainty against a variety of other factors, like how people behaved, where and whether they got support, and so on. We use mixed-methods analysis to support sensemaking, so that people can use the patterns that emerge from hundreds of answers to questions about stories as well as the stories themselves.
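A rough sketch of what mapping answers about uncertainty against another factor could look like; the column names and answer categories below are invented for illustration, not taken from NarraFirma:

```python
# Cross-tabulating answers to a question about uncertainty against
# another factor (here, whether people got support). Invented data.
import pandas as pd

answers = pd.DataFrame({
    "knew_what_was_coming": ["often surprised", "always knew", "often surprised",
                             "always knew", "often surprised", "often surprised"],
    "got_support":          ["yes", "yes", "no", "no", "no", "yes"],
})

# With hundreds of answers, contingency tables like this become the
# patterns a sensemaking group explores alongside the stories themselves.
print(pd.crosstab(answers["knew_what_was_coming"], answers["got_support"]))
```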
Third, the exercises we use in group sensemaking address uncertainty by including it in the discussion. For example, one of our exercises derives a landscape of meaning from the stories collected. To do this people place stories on axes. Uncertainty might BE one of the axes, but even if it is not, where there is uncertainty about the placement of a story, we have people split the story and put it in two or three places, denoting on the landscape the connection between (and sometimes the tension between) the placements. In other words, we make uncertainty an element of the artifact we build, so that when we are finished we have captured/represented something about uncertainty as well as about the topic at hand. This aspect is similar to other decision support methods such as future search or scenario planning, with the (to my mind) critical difference of using real-life stories as a 'ground truthing' check on what might otherwise be assumptions dressed up as estimates.
That might have been too long an answer. ;)
I read the paper you sent on expert elicitation; it was fascinating. My reaction was to wonder if it might be useful to have experts not only give numerical estimates but also describe their estimates qualitatively - as in, what KIND of "about" or "at least" guesses they are making. I don't know what those qualitative descriptors might be, but my intuition is that there might be descriptors that would provide context that could be enlightening, especially when many such descriptors are considered together. But that may not be what you are looking for. My focus these days is never on proof; it's always on utility - as in, what can we achieve by having experts characterize their estimates? What can we do to improve our planning and policy choices, and what can we do to improve the processes that underlie our choices?
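One way to make that idea concrete: treat hedge words like "about" and "at least" as qualitative descriptors that translate a single number into different intervals. The specific readings below (e.g., "about" as +/-10%) are arbitrary assumptions for illustration, not a published convention:

```python
# Toy sketch: a hedge word plus a number becomes a (lower, upper) interval.
# The interval readings are assumptions chosen for illustration only.
import math

def hedged_interval(hedge: str, x: float) -> tuple[float, float]:
    if hedge == "about":
        return (0.9 * x, 1.1 * x)    # assumed +/-10% spread
    if hedge == "at least":
        return (x, math.inf)         # one-sided: only a lower bound is claimed
    if hedge == "at most":
        return (-math.inf, x)        # one-sided: only an upper bound is claimed
    return (x, x)                    # no hedge: treat as a point value

print(hedged_interval("about", 100))     # approximately (90.0, 110.0)
print(hedged_interval("at least", 100))  # (100, inf)
```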
That brings me to one of the things I've been interested in / working on since those Stony Brook days, off and on, in addition to the story work: applications of complexity theory to decision support. I wonder if your Institute for Risk and Uncertainty has got into any of that? In the early aughts I was involved with the development of the Cynefin framework, which has been influential in the business and organizational world. This paper (which I wrote about 80% of) has been well regarded: http://alumni.media.mit.edu/~brooks/storybiz/kurtz.pdf
(That PDF is technically not supposed to be available, but apparently IBM has not noticed this guy having it up on his web site for so long...)
Later I split "my" part of Cynefin out to create the Confluence framework.
http://www.storycoloredglasses.com/p/confluence-sensemaking-framework.html
I need to get that published in a peer-reviewed journal but have not got around to it yet. It's on my to-do list, but I'm also working on three nearly-finished books on story-related things. I do have a paper coming out in the Journal of Policy and Complex Systems any day now. It's about the ways aspects of complexity have been distorted and "tamed" in their uses outside of science. I put a copy here (pre-publication, but accepted and revised):
http://cfkurtz.com/Kurtz_Butterfly_Paper_Revised_FormattedForPrinting.pdf
Anyway, that's just to say that I am active and interested in the complexity area, both as it applies to story work and in general, and I wonder if that might be another point of connection.
Thanks again for reaching out! I'd love to hear more about your work and what you want to do with it in the future. Say hi to Pat from both of us. :)