

To: combjelly who wrote (586292), 9/22/2010 11:43:17 AM
From: bentway
 
Sizing Up Consciousness by Its Bits

By CARL ZIMMER
nytimes.com
(If we learn what makes us conscious, can we design machines that are?)

One day in 2007, Dr. Giulio Tononi lay on a hospital stretcher as an anesthesiologist prepared him for surgery. For Dr. Tononi, it was a moment of intellectual exhilaration. He is a distinguished chair in consciousness science at the University of Wisconsin, and for much of his life he has been developing a theory of consciousness. Lying in the hospital, Dr. Tononi finally had a chance to become his own experiment.

The anesthesiologist was preparing to give Dr. Tononi one drug to render him unconscious, and another one to block muscle movements. Dr. Tononi suggested the anesthesiologist first tie a band around his arm to keep out the muscle-blocking drug. The anesthesiologist could then ask Dr. Tononi to lift his finger from time to time, so they could mark the moment he lost awareness.

The anesthesiologist did not share Dr. Tononi’s excitement. “He could not have been less interested,” Dr. Tononi recalled. “He just said, ‘Yes, yes, yes,’ and put me to sleep. He was thinking, ‘This guy must be out of his mind.’ ”

Dr. Tononi was not offended. Consciousness has long been the province of philosophers, and most doctors steer clear of their abstract speculations. After all, debating the finer points of what it is like to be a brain floating in a vat does not tell you how much anesthetic to give a patient.

But Dr. Tononi’s theory is, potentially, very different. He and his colleagues are translating the poetry of our conscious experiences into the precise language of mathematics. To do so, they are adapting information theory, a branch of science originally applied to computers and telecommunications. If Dr. Tononi is right, he and his colleagues may be able to build a “consciousness meter” that doctors can use to measure consciousness as easily as they measure blood pressure and body temperature. Perhaps then his anesthesiologist will become interested.

“I love his ideas,” said Christof Koch, an expert on consciousness at Caltech. “It’s the only really promising fundamental theory of consciousness.”

Dr. Tononi’s obsession with consciousness started in his teens. He was initially interested in ethics, but he decided that questions of personal responsibility depended on our consciousness of our own actions. So he would have to figure out consciousness first. “I’ve been stuck with this thing for most of my life,” he said.

Eventually he decided to study consciousness by becoming a psychiatrist. An early encounter with a patient in a vegetative state convinced Dr. Tononi that understanding consciousness was not just a matter of philosophy.

“There are very practical things involved,” Dr. Tononi said. “Are these patients feeling pain or not? You look at science, and basically science is telling you nothing.”

Dr. Tononi began developing models of the brain and became an expert on one form of altered consciousness we all experience: sleep. In 2000, he and his colleagues found that Drosophila flies go through cycles of sleeping and waking. By studying mutant flies, Dr. Tononi and other researchers have discovered genes that may be important in sleep disorders.

For Dr. Tononi, sleep is a daily reminder of how mysterious consciousness is. Each night we lose it, and each morning it comes back. In recent decades, neuroscientists have built models that describe how consciousness emerges from the brain. Some researchers have proposed that consciousness is caused by the synchronization of neurons across the brain. That harmony allows the brain to bring together different perceptions into a single conscious experience.

Dr. Tononi sees serious problems in these models. When people lose consciousness from epileptic seizures, for instance, their brain waves become more synchronized. If synchronization were the key to consciousness, you would expect the seizures to make people hyperconscious instead of unconscious, he said.

While in medical school, Dr. Tononi began to think of consciousness in a different way, as a particularly rich form of information. He took his inspiration from the American engineer Claude Shannon, who built a scientific theory of information in the mid-1900s. Mr. Shannon measured information in a signal by how much uncertainty it reduced. There is very little information in a photodiode that switches on when it detects light, because it reduces only a little uncertainty. It can distinguish between light and dark, but it cannot distinguish between different kinds of light. It cannot tell the differences between a television screen showing a Charlie Chaplin movie or an ad for potato chips. The question that the photodiode can answer, in other words, is about as simple as a question can get.

Our neurons are basically fancy photodiodes, producing electric bursts in response to incoming signals. But the conscious experiences they produce contain far more information than a single diode does. In other words, they reduce much more uncertainty. While a photodiode can be in one of two states, our brains can be in one of trillions of states. Not only can we tell the difference between a Chaplin movie and a potato chip, but our brains can go into a different state from one frame of the movie to the next.

“One out of two isn’t a lot of information, but if it’s one out of trillions, then there’s a lot,” Dr. Tononi said.
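The arithmetic behind that contrast comes straight from Shannon: picking one outcome out of N equally likely possibilities resolves log2(N) bits of uncertainty. Here is a minimal sketch of the gap, using a trillion states as a stand-in for the article's "trillions"; the counts are illustrative round numbers, not measurements of any real brain.

```python
# Shannon's measure: choosing one outcome out of N equally likely possibilities
# resolves log2(N) bits of uncertainty. The state counts are illustrative.
from math import log2

print(f"photodiode, light vs. dark:      {log2(2):.0f} bit")
print(f"system with a trillion states:   {log2(10**12):.1f} bits")  # roughly 40 bits
```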

Consciousness is not simply about quantity of information, he says. Simply combining a lot of photodiodes is not enough to create human consciousness. In our brains, neurons talk to one another, merging information into a unified whole. A grid made up of a million photodiodes in a camera can take a picture, but the information in each diode is independent from all the others. You could cut the grid into two pieces and they would still take the same picture.

Consciousness, Dr. Tononi says, is nothing more than integrated information. Information theorists measure the amount of information in a computer file or a cellphone call in bits, and Dr. Tononi argues that we could, in theory, measure consciousness in bits as well. When we are wide awake, our consciousness contains more bits than when we are asleep.

For the past decade, Dr. Tononi and his colleagues have been expanding traditional information theory in order to analyze integrated information. It is possible, they have shown, to calculate how much integrated information there is in a network. Dr. Tononi has dubbed this quantity phi, and he has studied it in simple networks made up of just a few interconnected parts. How the parts of a network are wired together has a big effect on phi. If a network is made up of isolated parts, phi is low, because the parts cannot share information.

But simply linking all the parts in every possible way does not raise phi much. “It’s either all on, or all off,” Dr. Tononi said. In effect, the network becomes one giant photodiode.

Networks gain the highest phi possible if their parts are organized into separate clusters, which are then joined. “What you need are specialists who talk to each other, so they can behave as a whole,” Dr. Tononi said. He does not think it is a coincidence that the brain’s organization obeys this phi-raising principle.
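Tononi's full phi calculation depends on a system's cause-effect structure and a search over all the ways of cutting it apart, which is well beyond a few lines of code. But the qualitative contrast he describes can be caricatured with a much cruder quantity: the mutual information shared across a single cut through a small binary system. The three toy networks below (independent "photodiodes," a lockstep "giant photodiode," and coupled specialists) are invented here for illustration; the numbers are a rough proxy, not phi.

```python
# A crude stand-in for the idea, NOT Tononi's phi: estimate how much
# information the two halves of a small binary system share, from sampled
# states. The three toy systems are invented to echo the article's contrast.
import random
from collections import Counter
from math import log2

def shared_bits(samples):
    """Mutual information (bits) between the two halves of each sampled state."""
    n = len(samples)
    joint = Counter(samples)                    # counts of (half_a, half_b) pairs
    pa = Counter(a for a, _ in samples)
    pb = Counter(b for _, b in samples)
    return sum(c / n * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

def camera_grid():
    """Six independent 'photodiodes': the halves share nothing."""
    return (tuple(random.randint(0, 1) for _ in range(3)),
            tuple(random.randint(0, 1) for _ in range(3)))

def giant_photodiode():
    """Everything switches on or off together: only two states in total."""
    bit = random.randint(0, 1)
    return (bit,) * 3, (bit,) * 3

def coupled_specialists():
    """Each half has its own 8-state repertoire, and the halves track each other."""
    a = tuple(random.randint(0, 1) for _ in range(3))
    b = tuple(x ^ (random.random() < 0.1) for x in a)   # noisy copy of the first half
    return a, b

for name, system in [("camera grid", camera_grid),
                     ("giant photodiode", giant_photodiode),
                     ("coupled specialists", coupled_specialists)]:
    samples = [system() for _ in range(50_000)]
    print(f"{name:20s} ~{shared_bits(samples):.2f} bits shared across the cut")
```

In this toy, the independent grid shares nothing across the cut, the lockstep network shares only its single bit, and the coupled specialists share the most, which matches the ordering Dr. Tononi describes.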

Dr. Tononi argues that his Integrated Information Theory sidesteps a lot of the problems that previous models of consciousness have faced. It neatly explains, for example, why epileptic seizures cause unconsciousness. A seizure forces many neurons to turn on and off together. Their synchrony reduces the number of possible states the brain can be in, lowering its phi.

Dr. Koch considers Dr. Tononi’s theory to be still in its infancy. It is impossible, for example, to calculate phi for the human brain because its billions of neurons and trillions of connections can be arranged in so many ways. Dr. Koch and Dr. Tononi recently started a collaboration to determine phi for a much more modest nervous system, that of a worm known as Caenorhabditis elegans. Despite the fact that it has only 302 neurons in its entire body, Dr. Koch and Dr. Tononi will be able to make only a rough approximation of phi, rather than a precise calculation.

“The lifetime of the universe isn’t long enough for that,” Dr. Koch said. “There are immense practical problems with the theory, but that was also true for the theory of general relativity early on.”

Dr. Tononi is also testing his theory in other ways. In a study published this year, he and his colleagues placed a small magnetic coil on the heads of volunteers. The coil delivered a pulse of magnetism lasting a tenth of a second. The burst causes neurons in a small patch of the brain to fire, and they in turn send signals to other neurons, making them fire as well.

To track these reverberations, Dr. Tononi and his colleagues recorded brain activity with a mesh of scalp electrodes. They found that the brain reverberated like a ringing bell, with neurons firing in a complex pattern across large areas of the brain for 295 milliseconds.

Then the scientists gave the subjects a sedative called midazolam and delivered another pulse. In the anesthetized brain, the reverberations produced a much simpler response in a much smaller region, lasting just 110 milliseconds. As the midazolam started to wear off, the pulses began to produce richer, longer echoes.

These are the kinds of results Dr. Tononi expected. According to his theory, a fragmented brain loses some of its integrated information and thus some of its consciousness. Dr. Tononi has gotten similar results when he has delivered pulses to sleeping people — or at least people in dream-free stages of sleep.

In this month’s issue of the journal Cognitive Neuroscience, he and his colleagues reported that dreaming brains respond more like wakeful ones. Dr. Tononi is now collaborating with Dr. Steven Laureys of the University of Liège in Belgium to test his theory on people in persistent vegetative states. Although he and his colleagues have tested only a small group of subjects, the results are so far falling in line with previous experiments.

If Dr. Tononi and his colleagues can get reliable results from such experiments, it will mean more than just support for his theory. It could also lead to a new way to measure consciousness. “That would give us a consciousness index,” Dr. Laureys said.

Traditionally, doctors have measured consciousness simply by getting responses from patients. In many cases, it comes down to questions like, “Can you hear me?” This approach fails with people who are conscious but unable to respond. In recent years scientists have been developing ways of detecting consciousness directly from the activity of the brain.

In one series of experiments, researchers put people in vegetative or minimally conscious states into fMRI scanners and asked them to think about playing tennis. In some patients, regions of the brain became active in a pattern that was a lot like that in healthy subjects.

Dr. Tononi thinks these experiments identify consciousness in some patients, but they have serious limitations. “It’s complicated to put someone in a scanner,” he said. He also notes that thinking about tennis for 30 seconds can demand a lot from people with brain injuries. “If you get a response, I think it’s proof that someone’s there, but if you don’t get it, it’s not proof of anything,” Dr. Tononi said.

Measuring the integrated information in people’s brains could potentially be both easier and more reliable. An anesthesiologist, for example, could apply magnetic pulses to a patient’s brain every few seconds and instantly see whether it responded with the rich complexity of consciousness or the meager patterns of unconsciousness.

Other researchers view Dr. Tononi’s theory with a respectful skepticism.

“It’s the sort of proposal that I think people should be generating at this point: a simple and powerful hypothesis about the relationship between brain processing and conscious experience,” said David Chalmers, a philosopher at Australian National University. “As with most simple and powerful hypotheses, reality will probably turn out to be more complicated, but we’ll learn something from the attempt. I’d say that it doesn’t solve the problem of consciousness, but it’s a useful starting point.”

Dr. Tononi acknowledged, “The theory has to be developed a bit more before I worry about what’s the best consciousness meter you could develop.” But once he has one, he would not limit himself to humans. As long as people have puzzled over consciousness, they have wondered whether animals are conscious as well. Dr. Tononi suspects that it is not a simple yes-or-no answer. Rather, animals will prove to have different levels of consciousness, depending on their integrated information. Even C. elegans might have a little consciousness.

“Unless one has a theory of what consciousness is, one will never be able to address these difficult cases and say anything meaningful,” Dr. Tononi said.



To: combjelly who wrote (586292), 9/22/2010 12:26:17 PM
From: bentway
 
The Pen That Never Forgets

By CLIVE THOMPSON
nytimes.com

In the spring, Cincia Dervishaj was struggling with a take-home math quiz. It was testing her knowledge of exponential notation — translating numbers like “3.87 x 10²” into a regular form. Dervishaj is a 13-year-old student at St. John’s Lutheran School in Staten Island, and like many students grappling with exponents, she got confused about where to place the decimal point. “I didn’t get them at all,” Dervishaj told me in June when I visited her math class, which was crowded with four-year-old Dell computers, plastic posters of geometry formulas and a big bowl of Lego bricks.

To refresh her memory, Dervishaj pulled out her math notebook. But her class notes were not great: she had copied several sample problems but hadn’t written a clear explanation of how exponents work.

She didn’t need to. Dervishaj’s entire grade 7 math class has been outfitted with “smart pens” made by Livescribe, a start-up based in Oakland, Calif. The pens perform an interesting trick: when Dervishaj and her classmates write in their notebooks, the pen records audio of whatever is going on around it and links the audio to the handwritten words. If her written notes are inadequate, she can tap the pen on a sentence or word, and the pen plays what the teacher was saying at that precise point.

Dervishaj showed me how it works, flipping to her page of notes on exponents and tapping a set of numbers in the middle of the page. Out of a tiny speaker in the thick, cigar-shaped pen, I could hear her teacher, Brian Licata, explaining that precise problem. “It’s like having your own little personal teacher there, with you at all times,” Dervishaj said.
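The linking trick the article describes, strokes stamped with where they sit on the page and how far into the running audio they were written, can be sketched in a few lines. The names and structure below are hypothetical, made up for illustration; they are not Livescribe's actual data format or software.

```python
# Hypothetical sketch of stroke-to-audio linking, invented for illustration;
# this is not Livescribe's data format or API.
from dataclasses import dataclass

@dataclass
class Stroke:
    x: float           # position on the page
    y: float
    audio_time: float  # seconds into the recording when the stroke was made

class SmartPage:
    def __init__(self):
        self.strokes = []

    def write(self, x, y, audio_time):
        self.strokes.append(Stroke(x, y, audio_time))

    def tap(self, x, y):
        """Return the audio timestamp of the stroke nearest the tap."""
        nearest = min(self.strokes, key=lambda s: (s.x - x) ** 2 + (s.y - y) ** 2)
        return nearest.audio_time

page = SmartPage()
page.write(10.0, 42.0, audio_time=720.5)  # jotted 12 minutes into the lesson
print(page.tap(10.2, 41.8))               # 720.5: where playback would resume
```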

Having a pen that listens, the students told me, has changed the class in curious ways. Some found the pens make class less stressful; because they don’t need to worry about missing something, they feel freer to listen to what Licata says. When they do take notes, the pen alters their writing style: instead of verbatim snippets of Licata’s instructions, they can write “key words” — essentially little handwritten tags that let them quickly locate a crucial moment in the audio stream. Licata himself uses a Livescribe pen to provide the students with extra lessons. Sitting at home, he’ll draw out a complicated math problem while describing out loud how to solve it. Then he’ll upload the result to a class Web site. There his students will see Licata’s handwriting slowly fill the page while hearing his voice explaining what’s going on. If students have trouble remembering how to tackle that type of problem, these little videos — “pencasts” — are online 24 hours a day. All the students I spoke to said they watch them.

LIKE MOST PIECES of classroom technology, the pens cause plenty of digital-age hassles. They can crash. The software for loading students’ notes onto their computers or from there onto the Web can be finicky. And the pens work only with special notepaper that enables the pen to track where it’s writing; regular paper doesn’t work. (Most students buy notepads from Livescribe, though it’s possible to print the paper on a color printer.) There are also some unusual social side-effects. The presence of so many recording devices in the classroom creates a sort of panopticon — or panaudiocon, as it were. Dervishaj has found herself whispering to her seatmate, only to realize the pen was on, “so we’re like, whoa!” — their gossip has been recorded alongside her notes. Although you can pause a recording, there’s currently no way to selectively delete a few seconds of audio from the pen, so she’s forced to make a decision: Delete all the audio for that lesson, or keep it in and hope nobody else ever hears her private chatter. She usually deletes.

Nonetheless, Licata is a convert. As the students started working quietly on review problems, their pens making tiny “boop” noises as the students began or paused their recording, Licata pulled me aside to say the pens had “transformed” his class. Compact and bristling with energy, Licata is a self-professed geek; in his 10 years of teaching, he has seen plenty of classroom gadgets come and go, from Web-based collaboration software to pricey whiteboards that let children play with geometric figures the way they’d manipulate an iPhone screen. Most of these gewgaws don’t impress him. “Two or three times a year teachers whip out some new technology and use it, but it doesn’t do anything better and it’s never seen again,” he said.

But this time, he said, was different. This is because the pen is based on an age-old classroom technique that requires no learning curve: pen-and-paper writing. Livescribe first released the pen in 2008; Licata encountered it when a colleague brought his own to work. Intrigued, he persuaded Livescribe to donate 20 pens to the school to outfit his entire class. (The pens sell for around $129.) “I’ve made more gains with this class this year than I’ve made with any class,” he told me. In his evenings, Licata is pursuing a master’s degree in education; separately, he intends to study how the smart pens might affect the way students learn, write and think. “Two years ago I would have told you that note-taking is a lost art, that handwriting was a lost art,” he said. “But now I think handwriting is crucial.”

TAKING NOTES HAS long posed a challenge in education. Decades of research have found a strong correlation between good notes and good grades: the more detailed and accurate your notes, the better you do in school. That’s partly because the act of taking notes forces you to pay closer attention. But what’s more important, according to some researchers, is that good notes provide a record: most of the benefits from notes come not from taking them but from reviewing them, because no matter how closely we pay attention, we forget things soon after we leave class. “We have feeble memories,” says Ken Kiewra, a professor of educational psychology at the University of Nebraska and one of the world’s leading researchers into note-taking.

Yet most students are very bad at taking notes. Kiewra’s research has found that students record about a third of the critical information they hear in class. Why? Because note-taking is a surprisingly complex mental activity. It heavily taxes our “working memory” — the volume of information we can consciously hold in our heads and manipulate. Note-taking requires a student to listen to a teacher, pick out the most important points and summarize and record them, while trying not to lose the overall drift of the lecture. (The very best students do even more mental work: they blend what they’re hearing with material they already know and reframe the concepts in their own words.) Given how jam-packed this task is, “transcription fluency” matters: the less you have to think about the way you’re recording notes, the better. When you’re taking notes, you want to be as fast and as automatic as possible.

All note-taking methods have downsides. Handwriting is the most common and easiest, but a lecturer speaks at 150 to 200 words per minute, while even the speediest high-school students write no more than 40 words per minute. The more you struggle to keep up, the more you’re focusing on the act of writing, not the act of paying attention.

Typing can be much faster. A skilled typist can manage 60 words a minute or more. And notes typed into a computer have other advantages: they can be quickly searched (unlike regular handwritten notes) and backed up or shared online with other students. They’re also neater and thus easier to review. But they come with other problems, not least of which is that typing can’t capture the diagrammatic notes that classes in math, engineering or biology often require. What’s more, while personal computers and laptops may be common in college, that isn’t the case in cash-strapped high schools. Laptops in class also bring a host of distractions — from Facebook to Twitter — that teachers loathe. And students today are rarely taught touch typing; some note-taking studies have found that students can be even slower at typing than at handwriting.
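Treating note-taking as pure transcription (which it is not; good notes compress and paraphrase), the speed gap alone caps how much of a lecture can be captured word for word. The figures below simply divide out the rates quoted above:

```python
# Rough ceiling on verbatim capture, using the rates quoted in the article.
# Real note-taking paraphrases, so this bounds word-for-word copying only.
lecture_wpm = 150   # low end of the quoted 150-200 words per minute
for method, wpm in [("handwriting", 40), ("skilled typing", 60)]:
    print(f"{method}: at most {wpm / lecture_wpm:.0%} of the words, verbatim")
```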

One of the most complete ways to document what is said in class is to make an audio record: all 150-plus words a minute can be captured with no mental effort on the part of the student. Kiewra’s research has found that audio can have a powerful effect on learning. In a 1991 experiment, he had four groups of students listen to a lecture. One group was allowed to listen once, another twice, the third three times and the fourth was free to scroll back and forth through the recording at will, listening to whatever snippets the students wanted to review. Those who relistened were increasingly likely to write down crucial “secondary” ideas — concepts in a lecture that add nuance to the main points but that we tend to miss when we’re focused on writing down the core ideas. And the students who were able to move in and out of the audio stream performed as well as those who listened to the lecture three times in a row. (Students who recorded more secondary ideas also scored higher in a later quiz.) But as anyone who has tried to scroll back and forth through an audio file has discovered, reviewing audio is frustrating and clumsy. Audio may be richer in detail, but it is not, like writing and typescript, skimmable.

JIM MARGGRAFF, the 52-year-old inventor of the Livescribe pen, has a particular knack for blending audio and text. In the ’90s, appalled by Americans’ poor grasp of geography, he invented a globe that would speak the name of any city or country when you touched the location with a pen. In 1998, his firm was absorbed by Leapfrog, the educational-toy maker, where Marggraff invented toys that linked audio to paper. His first device, the LeapPad, was a book that would speak words and play other sounds whenever a child pointed a stylus at it. It quickly became Leapfrog’s biggest hit.

In 2001, Marggraff was browsing a copy of Wired magazine when he read an article about Anoto, a Swedish firm that patented a clever pen technology: it imprinted sheets of paper with tiny dots that a camera-equipped pen could use to track precisely where it was on any page. Several firms were licensing the technology to create pens that would record pen strokes, allowing users to keep digital copies of whatever they wrote on the patterned paper. But Marggraff had a different idea. If the pen recorded audio while it wrote, he figured, it would borrow the best parts from almost every style of note-taking. The audio record would help note-takers find details missing from their written notes, and the handwritten notes would serve as a guide to the audio record, letting users quickly dart to the words they wanted to rehear. Marggraff quit Leapfrog in 2005 to work on his new idea, and three years later he released the first Livescribe pen. He has sold close to 500,000 pens in the last two years, mostly to teachers, students and businesspeople.

I met Marggraff in his San Francisco office this summer. He and Andrew Van Schaack, a professor in the Peabody College of Education at Vanderbilt University and Livescribe’s science adviser, explained that the pen operated, in their view, as a supplement to your working memory. If you’re not worried about catching every last word, you can allocate more of your attention to processing what you’re hearing.

“I think people can be more confident in taking fewer notes, recognizing that they can go back if there’s something important that they need,” Van Schaack said. “As a teacher, I want to free up some cognitive ability. You know that little dial on there, your little brain tachometer? I want to drop off this one so I can use it on my thinking.” Marggraff told me Livescribe has surveyed its customers on how they use the pen. “A lot of adults say that it helps them with A.D.H.D.,” he said. “Students say: ‘It helps me improve my grades in specific classes. I can think and listen, rather than writing.’ They get more confident.”

Livescribe pens often inspire proselytizing among users. I spoke to students at several colleges and schools who insisted that the pen had improved their performance significantly; one swore it helped boost his G.P.A. to 3.9 from 3.5. Others said they had evolved highly personalized short notations — even pictograms — to make it easier to relocate important bits of audio. (Whenever his professor reeled off a long list of facts, one student would simply write “LIST” if he couldn’t keep up, then go back later to fill in the details after class.) A few students pointed to the handwriting recognition in Livescribe’s desktop software: once an individual user has transferred the contents of a pen to his or her computer, the software makes it possible to search that handwriting — so long as it’s reasonably legible — by keyword. That, students said, markedly sped up studying for tests, because they could rapidly find notes on specific topics. The pen can also load “apps”: for example, a user can draw an octave of a piano keyboard and play it (with the notes coming out of the pen’s speaker), or write a word in English and have the pen translate it into Spanish on the pen’s tiny L.E.D. display.

Still, it’s hard to know whether Marggraff’s rosiest ambitions are realistic. No one has yet published independent studies testing whether the Livescribe style of enhanced note-taking seriously improves educational performance. One of the only studies thus far is by Van Schaack himself. In the spring, he conducted an unpublished experiment in which he had 40 students watch a video of a 30-minute lecture on primatology. The students took notes with a Livescribe pen, and were also given an iPod with a recording of the lecture. Afterward, when asked to locate specific facts on both devices, the students were 2.5 times faster at retrieving the facts on the pen than on the iPod. It was, Van Schaack argues, evidence that the pen can make an audio stream genuinely accessible, potentially helping students tap into those important secondary ideas that we miss when we’re scrambling to write solely by hand.

Marggraff suspects the deeper impact of the pen may not be in taking notes when you’re listening to someone else, but when you’re alone — and thinking through a problem by yourself. For example, he said, a book can overwhelm a reader with thoughts. “You’re going to get ideas like crazy when you’re reading,” Marggraff says. “The issue is that it’s too slow to sit down and write them” — but if you don’t record them, you’ll usually forget them. So when Marggraff is reading a book at home or even on a plane, he’ll pull out his pen, hit record and start talking about what he’s thinking, while jotting down some keywords. Later on, when he listens to the notes, “it’s just astounding how relevant it is, and how much value it brings.” No matter how good his written notes are, audio includes many more flashes of insight — the difference between the 30 words per minute of his writing and the 150 words per minute of his speech, as it were.

Marggraff pulls out his laptop to show me notes he took while reading Malcolm Gladwell’s book “Outliers.” The notes are neat and legible, but the audio is even richer; when he taps on the middle of the note, I can hear his voice chattering away at high speed. When he listens to the notes, he’ll often get new ideas, so he’ll add notes, layering analysis on top of analysis.

“This is game-changing,” he says. “This is a dialogue with yourself.” He has used the technique to brainstorm patent ideas for hours at a time.

Similarly, in his class at St. John’s, Licata has found the pen is useful in capturing the students’ dialogues with themselves. For instance, he asks his students to talk to their pens while they do their take-home quizzes, recording their logic in audio. That way, if they go off the rails, Licata can click through the page to hear what, precisely, went wrong and why. “I’m actually able to follow their train of thought,” he says.

Some experts have doubts about Livescribe as a silver bullet. As Kiewra points out, plenty of technologies in the past have been hailed as salvations of education. “There’s been the radio, there’s been the phonograph, moving pictures, the VCR” — and, of course, the computer. But the average student’s note-taking ability remains as dismal as ever. Kiewra says he now believes the only way to seriously improve it is by painstakingly teaching students the core skills: how to listen for key concepts, how to review your notes and how to organize them to make meaning, teasing out interesting associations between bits of information. (As an example, he points out that students taking notes on the planets will learn lots of individual facts. But if they organize them into a chart, they’ll make discoveries on their own: sort the planets by distance from the sun and orbital speed, and you’ll discover that the farther out a planet is, the more slowly it travels around the sun.) Kiewra also says that an effective way to get around the problem of incomplete and disorganized note-taking is for teachers to give out “partial” notes — handouts that summarize key concepts in the lecture but leave blanks that the students must fill in, forcing them to pay attention. Some studies have found that students using partial notes capture a majority of the main concepts in a lecture, more than doubling their usual performance.

Indeed, many modern educators say that students shouldn’t be taking notes in class at all. If it’s true that note-taking taxes their working memory, they argue, then teachers should simply hand out complete sets of notes that reflect everything in the lecture — leaving students free to listen and reflect. After all, if the Internet has done anything, it has made it trivially easy for instructors to distribute materials.

“I don’t think anyone should be writing down what the teacher’s saying in class,” is the blunt assessment of Lisa Nielsen, author of a blog, “The Innovative Educator,” who also heads up a division of the New York City Department of Education devoted to finding uses for new digital tools in classrooms. “Teachers should be pulling in YouTube videos or lectures from experts around the world, piping in great people into their classrooms, and all those things can be captured online — on Facebook, on a blog, on a wiki or Web site — for students to be looking at later,” she says. “Now, should students be making meaning of what they’re hearing or coming up with questions? Yes. But they don’t need to write down everything the teacher’s said.” There is some social-science support for the no-note-taking view. In one experiment, Kiewra took several groups of students and subjected them to different note-taking situations: some attended a lecture and reviewed their own notes; others didn’t attend but were given a set of notes from the instructor. Those who heard the lecture and took notes scored 51 percent on a subsequent test, while those who only read the instructor’s notes scored 69 percent.

Of course, if Marggraff has his way, smart pens could become so common — and so much cheaper — that bad notes, or at least incomplete ones, will become a thing of the past. Indeed, if most pen-and-paper writing could be easily copied and swapped online, the impacts on education could be intriguing and widespread. Marggraff intends to release software that lets teachers print their students’ work on dot-patterned paper; students could do their assignment, e-mail it in, then receive a graded paper e-mailed back with handwritten and spoken feedback from the teacher. Students would most likely swap notes more often; perhaps an entire class could designate one really good note-taker and let him write while everyone else listens, sharing the notes online later. Marggraff even foresees textbooks in which students could make notes in the margins and have a permanent digital record of their written and spoken thoughts beside the text. “Now we really have bridged the paper and the digital worlds,” he adds. Perhaps the future of the pen is on the screen.

Clive Thompson, a contributing writer for the magazine, writes frequently about technology and science.