Source: Watson Goes To Medical School
Everyone ought to be able to read and write; few people within the global mainstream would argue with that statement. But should everyone be able to program computers? The question is becoming critically important as digital technology plays an ever more central role in daily life. The movement to make code literacy a basic tenet of education is gaining momentum, and its success or failure will have a huge impact on our society.
The democratization of literacy in the late 19th century created one of the great inflection points in human history. Knowledge was no longer confined to an elite class, and influence began to spread throughout all levels of society. Any educated person could command the power of words.
What if any educated person had equal sway over the power of machines? What if we were to expand our notion of literacy to encompass not only human languages but also machine languages? Could widespread facility in reading and writing code come to be as critical to society as the ability to manipulate spoken and written language?
The usual definition of computer literacy stops at the UI: If a user knows how to make the machine work, he or she is computer-literate. But, of course, the deeper literacy of the programmer is far more powerful. Fortunately, computer languages and human languages are basically very similar. Like human languages, computer languages vary in form and character (Python to Java to Ruby) and can be implemented in infinite ways. My Python may not look like your Python, but it can do the same thing; likewise, a single idea can be expressed using a variety of combinations of English words. And both kinds of language are infinitely flexible. Just as a person literate in English can compose everything from a sonnet to a statute, a person literate in programming languages can automate repetitive tasks, saving time for things only a human can do; distribute access to systems of communication and control to large groups of people; and train machines to do things they’ve never done before. Computer programming already does marvelous things like deliver this article to your mind, operate life-sustaining medical devices and enable IBM’s Watson to win at Jeopardy. The current potential for innovation would be many times greater if every schoolchild had a firm grasp of programming concepts and how to apply them.
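That flexibility is easy to demonstrate. As a minimal sketch (not from the article), here is one small idea – collect the squares of the even numbers under ten – phrased two different ways in Python, just as a single English sentence can be worded many ways:

```python
# One idea, two phrasings: the squares of the even numbers from 0 to 9.

# Phrasing 1: an explicit loop.
squares_loop = []
for n in range(10):
    if n % 2 == 0:
        squares_loop.append(n * n)

# Phrasing 2: a list comprehension.
squares_comp = [n * n for n in range(10) if n % 2 == 0]

print(squares_loop)  # [0, 4, 16, 36, 64]
print(squares_comp)  # [0, 4, 16, 36, 64]
```

My Python may not look like your Python, but both versions compute exactly the same list.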
Among programmers, a movement is forming around this idea. Shereef Bishay, founder of San Francisco-based Dev Bootcamp, believes that coding is destined to become a new form of widespread literacy within the next 20 to 30 years. Everybody should learn to code, he says, because machine/human and machine/machine interaction is becoming as ubiquitous as human/human interaction. Those who don’t know how to code will soon be in the same position as those who couldn’t read or write 200 years ago.
Three hundred years ago, Bishay said, “you would have to hire someone to write a letter for you, and hire them to read the letter for you. It is just insane.” Today, most people are in exactly that position with software: they hire a skilled programmer to write computer programs for them.
The code literacy movement began to gather steam in late 2011, when Codecademy started teaching basic programming skills for free. The debate came to a head this week as two blog posts took the top spots on the tech website Hacker News. The first, dubbed “Please Don’t Learn to Code,” came from noted developer and StackOverflow.com creator Jeff Atwood on his blog Coding Horror. The second, a rebuttal entitled “Please Learn to Code,” came from Sacha Greif, a Parisian designer whose clients include HipMunk and MileWise.
“I do think (or at least, hope) that computer programming will become the next version of literacy,” Greif wrote in an email to ReadWriteWeb. “When I watch my 4-year-old niece interact with an iPhone, I see her intuitively using interaction patterns that older people often have trouble with, even when they’re computer-literate. And kids can easily memorize huge quantities of facts about complex abstract systems like Pokemon games. So clearly they have the potential to learn how to code.”
Not everyone in the programming community agrees. Atwood argues that verbal literacy is a different kind of skill, and more fundamental. “Literacy is the new literacy,” he told ReadWriteWeb. “As much as I love code, if my fellow programmers could communicate with other human beings one-tenth as well as they communicate with their interpreters and compilers, they’d have vastly more successful careers.”
Atwood stresses learning, and mastering, the basic skills of communication. Learn to read. Learn to write. Learn to hold a conversation. Learn some basic math. These skills, he says, are more essential than being able to program a computer.
Of course, the path to universal code literacy is not without roadblocks. The skills required will depend on how computing evolves over the next several decades: how, for example, will quantum computing affect our relationship with computers? The human capacity to learn, however, is not at issue. If it became necessary for me to code in order to interact with my machine, I would learn to code. It is no different than if I were dropped off in Cambodia without a place to stay or food to eat – I’d learn the local language posthaste.
At present, the ability to program computers is a vocational skill, like carpentry or cooking. There’s little impetus to make it universal. But imagine if it were.
Should computer programming become the new literacy? Or should it remain a vocation? Let us know in the comments.
The question among both cloud computing consumers and supercomputer clients alike has been when the distinction between the little cloud and the big iron would disappear. Apparently that boundary evaporated several months ago. In its twice-annual survey of big computer power, the University of Mannheim has reported that Amazon’s EC2 Compute Cluster – the same one you and I can rent space and time on today – performs well enough to be ranked #42 among the world’s Top 500 supercomputers.
How far down is #42? In terms of time, not far at all. When EC2 was but a gleam in Jeff Bezos’ eye, Lawrence Livermore National Laboratory’s BlueGene/L was king. Now, the 212,992-core beast ranks #22. Roadrunner, the amazing hybrid built from 122,400 IBM PowerXCell and AMD Opteron cores, sits at #11. Meanwhile, EC2 – whose makeup is a little of this and a little of that – has achieved #42 status with only 17,024 cores.
Twice each year, the rankings of 500 of the world’s supercomputers are assessed by the University of Mannheim in association with Berkeley National Laboratory and the University of Tennessee, Knoxville. Those assessments use the industry-standard Linpack benchmark, which measures floating-point performance. Systems are ranked by their maximal achieved Linpack performance, in gigaflops (GFlops, or billions of floating-point operations per second). This score is called the “Rmax rating,” and computers are ordered on the Top 500 list according to it. For comparison, Mannheim also publishes each system’s theoretical peak performance (“Rpeak”), representing how fast its architects believe it could or should perform. Dividing Rmax by Rpeak produces an efficiency figure, or yield, which represents how closely each system lives up to its engineers’ expectations.
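The arithmetic behind these ratings is simple enough to sketch in a few lines of Python. The EC2 Rmax figure below (240,090 GFlops) is from the November 2011 list; the Rpeak value is only an approximation, back-derived from EC2’s reported 67.8% yield for the sake of illustration:

```python
def yield_pct(rmax_gflops, rpeak_gflops):
    """Yield: achieved Linpack performance as a percentage of theoretical peak."""
    return 100.0 * rmax_gflops / rpeak_gflops

def to_petaflops(gflops):
    """Top 500 scores are reported in GFlops; 1 PFlops = 1,000,000 GFlops."""
    return gflops / 1_000_000

ec2_rmax = 240_090   # GFlops, from the November 2011 list
ec2_rpeak = 354_100  # GFlops, approximate; back-derived from the 67.8% yield

print(round(yield_pct(ec2_rmax, ec2_rpeak), 1))  # 67.8
print(round(to_petaflops(ec2_rmax), 2))          # 0.24 -- a quarter-petaflop
```

The same two-line calculation, fed K’s numbers, reproduces its 93.17% yield.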
EC2’s yield is not particularly great: just 67.8%. By comparison, the winner and still champion on the November 2011 Top 500 list is a machine simply called “K,” assembled for the RIKEN Advanced Institute for Computational Science in Kobe, Japan. Its yield is an astonishing 93.17%. Cloud architectures are not known for their processor efficiency; oftentimes they’re “Frankenstein” machines cobbled together from available parts, but marshaled by a strong, nimble, and adaptive cloud OS.
The Linpack Rmax score for EC2 topped out at 240,090 GFlops – almost a quarter of a petaflop. LANL’s Roadrunner was declared to have broken the one-petaflop barrier (one thousand trillion floating-point operations per second) in June 2008. Japan’s “K” has now shattered the 10-petaflop barrier with an Rmax score of 10,510,000 GFlops.
At this rate, EC2 actually may never catch up with the top 20; the world’s dedicated supercomputers are improving in both speed and efficiency faster than cloud clusters are. What’s interesting about the latest turn of events in the November 2011 rankings is how processors made for supercomputers are outpacing clusters built from commercial off-the-shelf (COTS) processors like Intel Xeon, AMD Opteron, and IBM Power. “K,” for example, is made up of 88,128 eight-core Fujitsu SPARC64 VIIIfx chips (705,024 cores in all), each core including an arithmetic unit separate from the instruction control unit. The same SPARC64 family drives Fujitsu’s mainframes.
Faster optical interconnects between chips, as opposed to commodity sockets, also account for huge speed gains. The first leap forward in interconnect technology took place a quarter-century ago, when computer designers began using four-dimensional mapping – a hypercube, or “tesseract” – to link nodes to one another. The theoretical number of feasible connections was dubbed a “googolplex,” a term popularized by the late Dr. Carl Sagan. That’s the door through which the word “googol” – and, later, the name “Google” – entered our common vernacular.
Fujitsu’s architecture for “K” is based on a six-dimensional mesh/torus (the “Tofu” interconnect), which cuts the hop count for processes moving between nodes by half or more, and which enables as many as 12 fault-tolerant failover routes per node. That’s a feature cloud architects might want to take into account: the failover capabilities that cloud operating systems like OpenStack must retrofit onto conventional clusters, supercomputers implement by design.
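The payoff of higher-dimensional interconnects can be sketched with a little arithmetic. In a torus, wrap-around links mean a message never has to travel more than halfway around any axis, so spreading the same number of nodes across more dimensions slashes the worst-case hop count. The geometries below are illustrative only, not K’s actual node layout:

```python
def max_hops(dims):
    """Worst-case hop count between two nodes in a torus whose per-dimension
    sizes are given by dims; wrap-around links halve the distance per axis."""
    return sum(size // 2 for size in dims)

# The same 4,096 nodes arranged two ways:
print(max_hops([64, 64]))            # 2-D 64x64 torus: 64 hops worst case
print(max_hops([4, 4, 4, 4, 4, 4]))  # 6-D 4^6 torus:   12 hops worst case
```

Going from two dimensions to six cuts the worst-case path from 64 hops to 12 – far better than half.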
Although IBM made headlines last year by demonstrating how Watson could win at “Jeopardy!”, the 16 BlueGene-architecture machines still on the current list are sliding. The Jülich Supercomputing Centre’s JUGENE system, which took the lead two years ago, has dropped to #13; the only other BlueGene systems in the top 25 now sit at #17, #22, and #23.
Cray, perhaps the most historically significant name in supercomputing and once shut out of the list entirely, is making a supreme comeback. It built 27 of the clusters on the latest list, including the #3 Jaguar at Oak Ridge National Laboratory (Rmax score: 1,759,000 GFlops), plus the systems at #6, #8, #11, #12, #19, and #20. Cray’s surge also represents a boon for AMD, since Cray uses Opteron CPUs exclusively.
The United States still maintains 263 of the Top 500, with Jaguar the fastest among them. But China is surging forward with 74 clusters, Japan has 30, and South Korea has 3, its fastest a Cray at #31.
For those of you at home still keeping score, Windows is almost out of the picture entirely. It powers only the #58 supercomputer, and that one was built in 2008: a 30,720-core Opteron cluster operated by the Shanghai Supercomputing Center. BSD powers the #94 machine, and Unix powers 30 systems – most of those, by the way, IBM Power machines. The rest run Linux. In all, some 49 of the Top 500 run on Power chips, while AMD chips power 63 clusters. Intel has the lion’s share with 384 clusters: among them, 239 use the latest Core architecture, 141 earlier x86 designs, and only 4 remain on Itanium.