I finally figured out how to move highlights and annotations from personal documents (not items purchased in the Kindle Store), whether made on my Kindle Paperwhite or in the Kindle Android app. I usually carry my smartphone but still prefer reading on my Paperwhite, and there are lots of occasions when I’d like to read on my phone but don’t have my Kindle Paperwhite handy.

The steps described in the Readwise tutorials below allow me to read, highlight, and take notes on either device while ensuring that I can still easily export those highlights and notes with the book title, author, and location data to Readwise. Once saved to Readwise, I download them as text notes on my computer and phone. Yay!

How to Export Kindle Highlights (Personal Documents Included): 6 Steps – Instructables
This little guide has some useful information describing the range of options available for exporting Kindle highlights, including a method I hadn’t noticed to export highlights and notes as an HTML file from within the Kindle Android app.

How do I import highlights by emailing them to Readwise? – Readwise
This tutorial describes a range of options for emailing highlights to Readwise from various sources.

How do I import highlights from documents I sent to my Kindle? – Readwise
This tutorial provides what I really needed: enumerated steps describing how to export highlights and notes from the Kindle Android app and send them to the Readwise email address. And it has a handy GIF depicting the process!

To get a good paper written, you only have to rewrite a good draft; to get a good draft written, you only have to turn a series of notes into a continuous text. And as a series of notes is just the rearrangement of notes you already have in your slip-box, all you really have to do is have a pen in your hand when you read (Ahrens, 2017, p. 74).

Chris Aldrich shared a great idea the other day in the Hypothes.is Liquid Margins Webinar on November 16, 2021. Robin DeRosa moderated the discussion and featured instructors who had used Hypothes.is with their students to annotate open educational resources (OER) and, in some cases, to create OER. The chat was lively, fun, and full of great ideas. It was one of the best meetings of this kind I’ve attended in a very long time.

Recently, I reread a post from Robin DeRosa about her collaboration with students in the development of an open textbook for her American Literature course. If you have any interest in OER, open textbooks, open pedagogy, Hypothes.is, or Pressbooks, you may be interested in this post. Although I have never undertaken a project like this, I am interested and this post is full of useful information. And it’s also full of “you can do this” encouragement. For a project with so many technical, pedagogical, intellectual property, privacy, and other issues to consider, I find myself appreciating the spirit she brought to her post.

During the webinar, Chris made a suggestion that intrigued me and seemed to build on Robin’s excellent post. I’m paraphrasing his chat, but basically, he suggested that an instructor could use the approach described by Sönke Ahrens in his book How to Take Smart Notes with students, over time and in a distributed manner. If the approach to writing a good manuscript can be broken down into the steps below, could the same approach be applied to the incremental, distributed creation of an open textbook or OER?

  1. Slip-box notes
  2. Series of arranged notes
  3. Continuous text
  4. Draft
  5. Paper

Consider students who are learning about an area for the first time, Mayer’s (2009) Cognitive Theory of Multimedia Learning, for example. Could the creation of an open textbook start with the production of so-called “slip-box notes”?

Another presenter during the webinar explained how she used Hypothes.is with students to highlight topics, tag them using an agreed-upon list or taxonomy, and then provide some justification for the decision to include an item as part of the collection of resources under that topic. Those students could then discuss whether the decision matched their understanding of the topic. When I think of a first step like this, I imagine replacing the discussion forums in my courses with annotation-driven discussions. And I get excited!

There are lots of other great possibilities to explore. And while I am unsure about the prospect of creating an entire open textbook, I feel much more confident that I can engage students in the process of creating simple notes. And I am grateful to Chris, Robin, and the other speakers and commenters during the webinar the other day!

Ahrens, S. (2017). How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking – for Students, Academics and Nonfiction Book Writers (1st ed.). CreateSpace Independent Publishing Platform.

Mayer, R. E. (2009). Multimedia Learning (2nd ed.). Cambridge University Press.

Sonification is the use of non-speech sound in an intentional, systematic way to represent information (Walker & Nees, 2011).

The Twenty Thousand Hertz podcast episode Video(less) Games is a fascinating look at games composed mostly or entirely of sound. Gamers and developers discuss their motivations for contributing and the experience of play. At about 15:09, Steve Saylor, a blind video gamer and game accessibility consultant, describes how he developed a rich series of audio cues that can be enabled. These cues tell players about environmental features and action in the game. Listen to compare the experience with the audio layer on and off.

Games composed mostly or entirely of sound are not new; the Twenty Thousand Hertz episode describes a text adventure game called Zork II that utilized a text-to-speech engine in the early 1980s. But the idea of developing a convention for audio cues within a game, or even across multiple games, reminded me of the sonification of math equations I first saw in the Complex Images for All Learners accessibility guide from Portland Community College. The DIAGRAM Center has a wonderful article on sonification with audio examples that can be played back at different speeds. Sonification is also not new, but the provision of multimodal data representations does not seem to be widespread in higher education, at least not that I have seen.

Similar technologies are also being piloted in traditional sports, such as tennis. The New York Times published a story by Amanda Morris describing a new technology called Action Audio that aims to make sports accessible to people with blindness or low vision. Action Audio converts data, such as data from the 10 to 12 cameras on an Australian Open tennis court, into 3-D sound in less than a second, allowing that audio to be broadcast alongside live radio commentary. You can hear an Action Audio sample of an Australian Open tennis match. To get the full benefit, use speakers or headphones with both left and right channels.

These innovations make me think about the materials that I create or make available to my students.  What would educators need to know to become proficient in the use, evaluation, and creation of multimodal data representations?  In the case of sonification, it might take an educator knowing where to find high-quality sonifications that had already been created.  It might require training in how to produce and design sonifications.  In terms of design, how can our existing base of research and theory help guide our decisions?  These are fascinating questions that I would like to explore more thoroughly and bring back to the courses I teach.

Might be attending Gardens and Streams II: An IndieWebCamp Pop-up Session on Wikis, Digital Gardens, Online Commonplace Books, Zettelkasten and Note Taking
Event Details
Date: Saturday, September 25, 2021
Time: 9:00 AM – 1:00 PM Pacific
Event page: https://events.indieweb.org/2021/09/gardens-and-streams-ii-pPUbyYME33V4
We’ll discuss and brainstorm ideas related to wikis, commonplace books, digital gardens, zettelkasten, and note taking on personal ...