Abandoned Serendipity Day idea: Valve’s Steam controller as an editor’s CMS controller

Am on vacation. Missed my first NPR Serendipity Day today. Am okay with that — this break is necessary and good — but still felt like throwing down ideas. Like…

I don’t play video games. Not having grown up in a video-game house (beyond Tiger handhelds and a dozen computer games), I’ve never gotten into them. But that doesn’t mean I don’t have a ton of respect for them — and intrigue about how they work. So, Valve’s “wacky Steam controller,” as Gizmodo put it, has fascinated me on its slow move to the market.

On the controller are two trackpads, not joysticks. Valve told Giz why: “The trackpads allow far higher fidelity input than has previously been possible with traditional handheld controllers. Steam gamers, who are used to the input associated with PCs, will appreciate that the Steam Controller’s resolution approaches that of a desktop mouse.”

Within those trackpads, you also get haptic feedback. A Gizmodo follow-up gets into the potential game use: “The idea of that feedback is that it’s supposed to create the feeling that the trackpad has edges on it, perhaps the outlines of buttons, if that’s how it is programmed for a specific game. Those trackpads are sensitive to movement and pressure.”

So, I wonder about the possibilities — for this controller or the concepts behind it — in a news CMS. Something that drives me crazy about media management is that we put so much focus on editing and so little on our distribution. Editing is tremendously important, don’t get me wrong, and better editing tools make for better editing. The less the CMS can get in the way of the editor’s relationship with the words and narrative assets, the better. But we haven’t kept up with managing the overall narrative and the relationship with the audience. The crowd is fluid, and we’re not.

Tools based on motion, like the Steam controller, are thus so interesting to me, especially when you consider how the controller rewards precise movements (like a keyboard and mouse) rather than general, physical ones (like a joystick). Until we build The Son of VRML and until 3-D suddenly makes (another) digital comeback, joysticks have no place in a CMS. But precision movements guided by real-time feedback, whether from the news audience or from system-managed distribution routes? Now you’re talking.

Given the increasing competition in the marketplace, the print front-page editor or the magazine editor-in-chief needs to enter digital newsrooms more than ever. These editors have always been first and foremost about feel and placement and audience flow. MOTION. Let this editor choose where stories go across platforms and products, make high-level choices based on known and potential audience reactions and send the unworthy or almost-there content back to the line editors and producers for rework.

What could tools based around movement give this editor? Add a headset, and I think the movement spectrum would be wide open. Would there be shortcomings? Absolutely. No doubt about it. All involved (including me) would be terrified at first to have a top editor working without a keyboard.

But I’d also be so damn excited. In a traditional-media scenario, where the workflow runs hard in one direction, it’s a blessing and a curse of top editors — often promoted in newsrooms for their decision-making abilities — that they so often lose themselves in copy, fixing others’ shortcomings with that promoted judgment rather than directing those people to fix the work themselves. The workflow has usually sent those people home by then anyway. But in digital publishing, in our digital flows, the real-time nature means everyone involved is often still in the house, and directives — and clarifications of directives — can arrive in time for quick action. The amount of content many newsrooms process has also grown enormously. An audience editor can’t afford to lose her/himself redoing the copy. The audience demands more copy and more attention.

I’m not arguing for this editor to ignore the copy and promote mindlessly; I’m just saying we have to think hard about the division of labor — and put in the effort and trust for smart divisions of labor to work. Scrum-master such a process, mediating it daily, as necessary. Do what you need to make it hold up. But at least one editor needs to be thinking about motion, about routing, about distribution, about where the content meets the audience.

Having different technologies in hand could be valuable. Could be! Or not. Maybe the keyboard and mouse are more than enough, perfectly sufficient for the developing digital space and its challenges. But my gut says they’re not. My gut says that if you want motion in your content flow, you have to commit to motion. Damn the torpedoes, take a few chances, mediate and Agile-retrospect as you go, iterate as you learn, and full speed ahead.

Sincerely,
Serendipity Day Patrick

Abandoned Serendipity Day idea: Responsive design as interpretive dance

Am on vacation. Missed my first NPR Serendipity Day today. Am okay with that — this break is necessary and good — but still felt like throwing down ideas. Like…

This one, considered briefly for a Serendipity Day a year or so ago. Rediscovered in old notebook. Not sure how awake/sober I was when I wrote this. But I like it.

Dancers stand in a grid formation. Dancers each begin crouched in a small, compact form, then unfurl themselves. Then they fold back to a compact form. Then they unfurl. Then go back down. Then unfurl. Everyone briefly looks quizzical, unsure. Dancers form into two groups at left and right, at a range of heights from standing to kneeling, and they mesh arms to form a rectangle and assume a relaxed pose, facing away from the audience. From top to bottom, they spin to audience, excited. Dancers standing toward the back step out from behind first person on their side and fill the space.

And that’s all of my notes! I assume more follows. Append moves as you like!

Abandoned Serendipity Day idea: Radio based on where quiet and darkness take you

Am on vacation. Missed my first NPR Serendipity Day today. Am okay with that — this break is necessary and good — but still felt like dreaming up ideas. Like…

We challenge people to sit in the chamber in the dark – one person stayed in there for 45 minutes. When it’s quiet, ears will adapt. The quieter the room, the more things you hear. You’ll hear your heart beating, sometimes you can hear your lungs, hear your stomach gurgling loudly. In the anechoic chamber, you become the sound.

“Earth’s Quietest Place Will Drive You Crazy in 45 Minutes” could make for a fun Serendipity Day project. You’d have to find the most soundproof room at NPR, block it off from visitors and other interruptions and sit there for some extended amount of time. Would it be as quiet as the Earth’s Quietest Place? Probably not, likely nowhere close. But I wonder what it’s like to use a place made for generating sound to generate silence instead, and then see where that takes someone mentally. It could be an interesting exercise for someone who regularly has a lot of sound coming his or her way, such as, say, a senior product manager for NPR digital.

Consider, too, the effect of darkness. I’m envious of people who’ve had the music-listening experience described recently in The New Yorker:

The opening minutes of Georg Friedrich Haas’s Third String Quartet, which unfolds in total darkness and can last more than an hour, are so unsettling that the members of the JACK Quartet … prep the audience with a “practice run” beforehand. The lights are turned off briefly, and anyone who feels too uncomfortable with the plunge into pitch-blackness can leave before the music begins. Occasional adverse reactions are understandable: it’s like being buried alive. 

How would a radio program (digital, not broadcast) based around silence or darkness — emptiness or fullness, depending on your interpretation — sound? People often listen to NPR as they’re doing other things, like driving a car (for broadcast) or jogging (for digital) or whatnot. What if there were a show based on doing nothing but listening, and that show focused on silence and where your mind went during the silences?

I’m talking about the gaps A Visit from the Goon Squad explains so well.

Could such a production actually be good for you, detaching you briefly from a world of noise and letting you hear out your inner voices and more original thoughts? And what might the physical implications be? “The sound of silence is music to the heart,” one study says. (Googling such studies cost me a well-spent half-hour the other day.) To test that in part (and to try some new tech), you could watch your heart rate through an app like this one.

Obviously, a program like this one could never happen on broadcast. Over-the-air needs to stay welcoming to radio-dial scanners (like me) and has to meet certain underwriting requirements for all involved. Lengthy silences and not allowing underwriting every 20 minutes would be non-starters.

Subscribed experiences through podcasting or similar delivery methods (Apple TV) could be interesting for the audience, though. That differentiation might also allow for unique sponsorship. Would people subscribe to such a franchise? I don’t know. But it would be fun to imagine for a day.

Nothing wrong with waiting on an idea

A year and a quarter ago, I chased an idea for Serendipity Day that didn’t pan out. Last weekend, I came across my middle-of-the-night notes.

We typically think of digital “lean-forward” vs. “lean-back” experiences as interaction vs. consumption. But what about voice?

Voice is interaction, but there are different kinds of interaction. Just like there are different kinds of consumption. Like — lean forward and lean back. The phone is a good example of how voice can be both. You can feel the difference between lean-forward and lean-back conversations.

What does lean-back mean for voice?

Observing, thinking, contemplative expression. A voice sounding much more like NPR than Siri’s lean-forward style. Different from Ford’s SYNC.

Does lean-back audio interaction exist? If so, how do we nurture it?

I tried Google’s voice tech the next day and didn’t find enough options to make an experiment interesting. Couldn’t lean back and get much done. This spring, though, I was glad to go back to the idea and find so many more toys. Years ago, I felt that when an idea arrived, it had to happen right then or it would never be real. Happy to grow up and find patience.

Posting this now as a reminder to myself… and to anyone else who might run across it, anyone who wonders when something sought might arrive.

And to remind myself to have patience.

Playing with Chrome audio for Serendipity Day

[Photo: the Serendipity Day sign]

This week brought Serendipity Day again at NPR. It was a blast as usual. Coworkers did all kinds of cool stuff: data mash-ups, design explorations, and much more. Especially great was seeing Google Glass prototyping…

[Photo: Google Glass prototyping]

For my project, I dug into some developing Google audio APIs. I had first looked into them for a Serendipity Day a year or so ago but found them not far enough along to support longer-form audio concepts. But Google continued to develop them, and by this spring they were in better shape.

To see my demo — and try it out for yourself — open this page in a new browser window. You’re going to need the latest version of Chrome (27), a microphone and your volume on. Put this page, particularly the text below, side by side with the demo page on your screen(s). Then, on the demo page, click the microphone, click to allow access and start reading aloud.

Hopefully, various things will happen when you say the words in bold.

There is a 100% chance the technology in this demo will screw up. I’m going to talk slowly and try to enunciate more than usual for this demo. Let’s see how it goes. New paragraph.

So, for Serendipity Day, I explored Google’s voice APIs using code snippets from around the Web. The latest big thing is voice search coming to Chrome, but I’ve been playing with three quieter feature enhancements.

First thing: Real-time transcription — at length. Siri cuts you off. Chrome doesn’t. I can keep talking and talking, and, though the transcription technology makes mistakes, it keeps up with me.
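
Behind that part of the demo is Chrome’s webkitSpeechRecognition, the prefixed Web Speech API object. A minimal sketch of continuous, interim-result transcription looks roughly like this (the 'transcript' element ID is invented here for illustration):

    // Minimal sketch: continuous transcription in Chrome via the Web Speech API.
    // The 'transcript' element ID is invented for illustration.
    var recognition = new webkitSpeechRecognition();
    recognition.continuous = true;      // keep listening instead of stopping after one phrase
    recognition.interimResults = true;  // show guesses as they arrive, then firm them up

    var finalText = '';
    recognition.onresult = function (event) {
      var interim = '';
      for (var i = event.resultIndex; i < event.results.length; i++) {
        if (event.results[i].isFinal) {
          finalText += event.results[i][0].transcript;
        } else {
          interim += event.results[i][0].transcript;
        }
      }
      document.getElementById('transcript').textContent = finalText + interim;
    };

    recognition.start(); // Chrome asks permission to use the microphone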

As the technology improves, you can imagine the possibilities for NPR. Real-time text for the hearing impaired. Real-time breaking-news experiences with triggers based on language. Add OpenCalais and other APIs, and a location could trigger a map. Or a name could trigger a set of headlines.

Second thing: Real-time audio translation. Let’s say I want Spanish. That’s possible. Chrome translates on the page as I talk. Or French. I bet Robert Siegel reads great in French. I don’t read French, so I have no idea how many mistakes it’s making right now. But still.
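
The translation piece in my demo came from Eric Bidelman’s Speech Translate code, so take this only as a rough sketch of the idea: hand each finished chunk of transcript to the Google Translate API (v2) and show what comes back. MY_API_KEY is a placeholder, and you’d need your own key.

    // Rough sketch: translate a chunk of recognized text with the Google Translate API v2.
    // MY_API_KEY is a placeholder; the real demo reused Eric Bidelman's Speech Translate code.
    function translate(text, targetLang, callback) {
      var url = 'https://www.googleapis.com/language/translate/v2' +
                '?key=MY_API_KEY' +
                '&target=' + encodeURIComponent(targetLang) + // e.g. 'es' or 'fr'
                '&q=' + encodeURIComponent(text);
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () {
        var data = JSON.parse(xhr.responseText);
        callback(data.data.translations[0].translatedText);
      };
      xhr.send();
    }

    // Inside recognition.onresult, each final chunk could be passed along:
    // translate(chunk, 'es', function (spanish) { /* show it on the page */ });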

Third thing. Recording audio in the browser. [Must click to allow microphone use again.] Imagine it. Spoken responses to news. StoryCorps potential in your home. Tiny Desk karaoke, mashing up your audio with our video on the back-end.

[Last trigger should play the recorded audio.]
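
The recording in the actual demo ran on Matt Diamond’s recorder.js on top of the Web Audio API. As a simpler sketch of the same idea (record in the browser, then play it back), here’s roughly how it looks with getUserMedia and MediaRecorder, an API that arrived after this demo:

    // Sketch: record microphone audio in the browser, then play it back.
    // The real demo used Matt Diamond's recorder.js; MediaRecorder is a newer,
    // simpler API that does roughly the same job.
    var recorder;
    var chunks = [];

    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
      recorder = new MediaRecorder(stream);
      recorder.ondataavailable = function (e) { chunks.push(e.data); };
      recorder.onstop = function () {
        var blob = new Blob(chunks, { type: 'audio/webm' });
        new Audio(URL.createObjectURL(blob)).play(); // the "last trigger" above would land here
      };
    });

    // Keyword triggers would call recorder.start() and, later, recorder.stop().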

The technology is deeply imperfect. But it has NPR potential.

How’d it go? I’m guessing part of the demo worked and part didn’t. That’s about right. Via Javaun, here I am below with part working and part not.

[Photo: me mid-demo, part working and part not]

What I did was mash up three code samples: the Google Web Speech API demo (with further details from Google engineer Glen Shires here), Eric Bidelman’s Speech Translate demo (from the just-a-week-ago Google I/O conference, via Lifehacker), and Matt Diamond’s Web Audio API recorder.js example (with helpful background from Google’s Chris Wilson). I stripped away what I didn’t need, made them talk to each other with JavaScript (somewhat) and added keyword triggers (if-then JavaScript interpretations of the text the scripts pass around) to indicate the API possibilities.
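
Those keyword triggers were the least glamorous part: scan each chunk of recognized text for a phrase and fire a matching action. A stripped-down sketch, with the phrases and actions invented here for illustration:

    // Sketch of the keyword triggers: scan recognized text for phrases, fire actions.
    // The phrases and actions are invented for illustration.
    var triggers = [
      { phrase: 'new paragraph', action: function () { console.log('start a new paragraph'); } },
      { phrase: 'spanish',       action: function () { console.log('switch translation target to es'); } },
      { phrase: 'play it back',  action: function () { console.log('play the recorded audio'); } }
    ];

    function checkTriggers(transcriptChunk) {
      var text = transcriptChunk.toLowerCase();
      triggers.forEach(function (t) {
        if (text.indexOf(t.phrase) !== -1) {
          t.action();
        }
      });
    }

    // Called from recognition.onresult whenever a final result arrives.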

I’m not a coder, so the resulting code isn’t anywhere near elegant. The page is probably processing way more than it needs to, and occasionally the recording part gets stuck in an access-request loop. Also, the recording feature arrived in full, non-beta Chrome just this week. It’s worked well on my home MacBook Air, but it’s never worked on my Windows XP machine at the office. (Same with this cool whistle-pitch detector.)

At yesterday’s demo, everything worked except for the recording part, which hit the bad access loop. Despite the loop, it was a good time. As always, I liked the time to try something new, make mistakes, learn a million things, and explore the almost-but-not-quite-yet possibilities.

My spring Serendipity hour: Dew, new and news

A few times a year, everyone in Digital Media gets a day or so to work on any idea they feel might help NPR. No meetings are held. Everyone pushes themselves in different directions. There is a prize given for biggest failure.

We call it Serendipity Day. For the first one, I chased new CMS ideas. For the second, I explored a type of human-centered design. For the third, I taught myself some PHP and API coding. For the fourth Serendipity Day, this time around, things ran off the rails a bit. The period collided with a major project launch, and several of us lost most of our Serendipity time.

The project was worth it, no doubt. But missing out on awesome creative think time was a bummer. So, I made the best of the free hour I had. For my three-minute demo (we present to each other at the end), I talked. 

Life has been busy recently. Lots of projects, emails, meetings. I didn’t have much time the past couple days to get Serendipitous.

So, unhappy yesterday morning, I picked up a book from my coffee table that I hadn’t read before. I opened it to a random page and promised myself I’d talk about whatever that page taught me about what we do at NPR.

The page had a poem.

Kay Ryan, “Dew”

As neatly as peas
in their green canoe,
as discreetly as beads
strung in a row,
sit drops of dew
along a blade of grass.
But unattached and
subject to their weight,
they slip if they accumulate.
Down the green tongue
out of the morning sun
into the general damp,
they’re gone.

What the poem made me think about was the way our digital stories meet the world.

Consider how dew forms. In spring, the sun heats Earth’s surfaces. At night, those surfaces radiate the heat away and cool. Water vapor in the surrounding air condenses on the cooled grass, and dew drops form.

Eventually, the sun comes up. The drops evaporate. The cycle repeats.

News storytelling is similar.

NPR takes in the world’s heat. Our journalism warms until we find the right moment to release. We get cooler than the rest of the world. Our reporting hits the air, and stories form on the surfaces around us. The stories are noticeable for a while. Then the day burns them away.

Or they slip into the general damp.

If you buy this comparison, you start thinking. About the heat of information, about publishing, about the delicacy of a new story.

Our expectation for newness these days is low.

Discovery is fierce competition. We acquiesce to the idea that everything we see someone else has seen before. We give up on “new.” We settle for “new-to-me.”

But this problem is a good challenge for NPR. Even if someone else has seen an NPR story before you, how do we imbue that story with a newness that sticks?

We’re off to a decent start. New is clean. We love white space. New is different. We cover stories no one else covers, a newness that creates engagement and pageviews.

But we also have work to do. New is pure. We’ve only begun to simplify our layouts. New is fresh. We’re slow on trending topics. New is dewy. We can be dry.

That’s what I’m taking away today — the work of newness ahead here at NPR. But also that new is unburdened. New doesn’t have projects.

New doesn’t have emails and tickets. New doesn’t have a backlog. In order to preserve new for others, to remind our audience of what new feels like, we have to preserve it for ourselves.

We have to remind ourselves, even if busyness takes over just about all of our Serendipity Day, to pick up a book on our coffee table, open to a random page and turn the spring heat into something new.

Much Serendipity Day code pain and learning


[Screenshot of the tool]
I built this! It was little but made me happy. Grab Story was the API part.

So…. Code Year!

Ever since friend Casey attempted Code Year and nearly drove himself over the edge, mentions of the start-programming-in-2012 idea have popped up regularly in my streams. I haven’t observed a formal Code Year, but I’ve been coding more this year than I have in a long time.

The last time I coded anything more than HTML and CSS from scratch was probably Mr. Blair’s computer-science class, my junior year of high school. The last time I wrenched apart and reassembled other people’s scripts was senior year of college and the couple years after. Life, at a certain point, became more about working with coders than being one. I really came to like working with news coders, seeing their challenges.

But as you know, at work we have something called Serendipity Day.

A few times throughout the year in our Digital Media division, we stop all projects, block all calendars and work a day on whatever we feel might benefit NPR. For the first Serendipity Day, last spring, I teamed with my UX colleague Vince. We came up with a new way of running the NPR homepage. Our team coded and implemented that tool in December, which was cool. For the second Serendipity Day, I applied human-centered design to NPR digital consumption and time-of-day factors. For the third, most recent Serendipity Day, I decided to code.

I worked all the time with PHP developers on our publishing systems (aka storytelling tools aka content-management system), so I decided to learn PHP. I’d never looked at PHP code beyond a little WordPress.

For a problem to solve, I decided to tackle the NPR homepage’s blog rotator. The three-frame piece, in the top-right of the page, promoted our blogs well. But it was awful to maintain. For each frame, an editor had to enter the post’s headline, the post URL in two places, the main post image, the blog’s logo, the blog’s overall URL in two places, and the blog’s tagline. And the editor had to do all this by hand in HTML.

This problem wasn’t NPR’s biggest or most complicated, by far. A coder could have handled it easily. But it was well-sized for me to learn on.

So.

The rotator had three frames, but each frame had two levels of data. For each frame, there was the post’s metadata, changing each time. Each post was then tied to an NPR blog: a finite set of blog metadata.
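
If it helps to picture those two levels, here’s the shape of the data sketched as a JavaScript object. The field names are invented for illustration, and the actual tool was PHP pulling post details from the NPR Story API (that was the Grab Story part):

    // Sketch of the rotator's two levels of data (field names invented for illustration;
    // the real tool was PHP, with post details pulled from the NPR Story API).
    var blogs = {                        // finite set of blog metadata, entered once
      'the-two-way': { name: 'The Two-Way', url: '...', logo: '...', tagline: '...' },
      'monkey-see':  { name: 'Monkey See', url: '...', logo: '...', tagline: '...' }
    };

    var frames = [                       // per-frame post metadata, changing all the time
      { blogId: 'the-two-way', storyId: '...' },
      { blogId: 'monkey-see',  storyId: '...' },
      { blogId: 'the-two-way', storyId: '...' }
    ];

    // Given a storyId, the API can supply the headline, post URL and image,
    // so the editor picks a story instead of hand-coding the HTML.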

In tackling this problem, my main goal was workflow efficiency (sanity). My secondary goal, for learning’s sake, was to reach that efficiency in a way that built on existing content data while still allowing editorial flexibility.

So. I started small. Then I worked my way out. Here were my steps.

Continue reading Much Serendipity Day code pain and learning