Monday, May 5, 2008

VXers slap copyright notices on malware

Malware authors have lifted a page from the legit software industry's rule book and are slapping copyright notices on their Trojans.

One Russian-based outfit has claimed violations of its "licensing agreement" by its underworld customers will result in samples of the knock-off code being sent to anti-virus firms.

The sanction was spotted in the help files of a malware package called Zeus, detected by security firm Symantec as "Infostealer Banker-C". Zeus is offered for sale on the digital underground, and its creators want to protect their revenue stream by making the creation of knock-offs less lucrative.

The copyright notice, a reflection of a lack of trust between virus creators and their customers, is designed to prevent the malware from being freely distributed after its initial purchase. There's no restriction on the number of machines miscreants might use the original malware to infect.

Virus writers are essentially relying on security firms to help them get around the problem that miscreants who buy their code to steal online banking credentials have few scruples about ripping it off and selling it on.

In a blog posting, Symantec security researchers have posted screen shots illustrating the "licensing agreement" for Infostealer Banker-C.

The terms of this licensing agreement demand that clients promise not to distribute the code to others, and that they pay a fee for any update to the product that doesn't involve a bug fix. Reverse engineering of the malware code is also verboten.

"These are typical restrictions that could be applied to any software product, legitimate or not," writes Symantec researcher Liam O'Murchu, adding that the most noteworthy section deals with sanctions for producing knock-off code (translation below).

In cases of violations of the agreement and being detected, the client loses any technical support. Moreover, the binary code of your bot will be immediately sent to antivirus companies.
Despite the warning, copies of the malware were traded freely on the digital underground within days of its release, Symantec reports. "It just goes to show you just can’t trust anyone in the underground these days," O'Murchu notes.

Friday, May 2, 2008

After a Long Time !!!

Just thought of writing a BASIC program after almost 15 years !!!

AUTO

10 REM --- Hi BASIC ---
20 PRINT "HELLO WORLD! "
30 END

AUTO

10 REM -- I STILL REMEMBER YOU ---
20 PRINT "What is your name"
30 INPUT NAME$
40 PRINT "What is your age"
50 INPUT AGE
60 PRINT "What is date you born. Enter only the date"
70 INPUT DATE
80 PRINT "What is month you born. Enter only the month"
90 INPUT MONTH
100 DIFF = 2008 - AGE
110 NDIFF = DIFF
120 NUM = 1
125 REM Born before May: the 2008 birthday has already passed, so count up to 2008
130 IF MONTH < 5 THEN GOTO 140 ELSE GOTO 210
140 FOR I = DIFF TO 2008 STEP 1
150 PRINT "Your "; NUM ; " birthday was on "; DATE ;"/"; MONTH ;"/"; NDIFF
160 NDIFF = NDIFF + 1
170 NUM = NUM + 1
180 NEXT I
200 GOTO 270
205 REM Born in May or later: the 2008 birthday has not happened yet, so stop at 2007
210 IF MONTH > 4 THEN GOTO 220 ELSE GOTO 270
220 FOR I = DIFF TO 2007 STEP 1
230 PRINT "Your "; NUM ; " birthday was on "; DATE ;"/"; MONTH ;"/"; NDIFF
240 NDIFF = NDIFF + 1
250 NUM = NUM + 1
260 NEXT I
270 PRINT "Goodbye BASIC. I will love you forever !!!"
280 REM --- End ---
290 STOP

May 1, 1964: First Basic Program Runs

1964: In the predawn hours of May Day, two professors at Dartmouth College run the first program in their new language, Basic.

Mathematicians John G. Kemeny and Thomas E. Kurtz had been trying to make computing more accessible to their undergraduate students. One problem was that available computing languages like Fortran and Algol were so complex that you really had to be a professional to use them.

So the two professors started writing easy-to-use programming languages in 1956. First came Dartmouth Simplified Code, or Darsimco. Next was the Dartmouth Oversimplified Programming Experiment, or Dope, which was too simple to be of much use. But Kemeny and Kurtz used what they learned to craft the Beginner's All-Purpose Symbolic Instruction Code, or Basic, starting in 1963.

The college's General Electric GE-225 mainframe started running a Basic compiler at 4 a.m. on May 1, 1964. The new language was simple enough to use, and powerful enough to make it desirable. Students weren't the only ones who liked Basic, Kurtz wrote: "It turned out that easy-to-learn-and-use was also a good idea for faculty members, staff members and everyone else."

And it's not just for mainframes. Paul Allen and Bill Gates adapted it for personal computers in 1975, and it's still widely used today to teach programming and as a, well, basic language. (Reacting to the proliferation of complex Basic variants, Kemeny and Kurtz formed a company in the 1980s to develop True BASIC, a lean version that meets ANSI and ISO standards.)

The other problem Kemeny and Kurtz attacked was batch-processing, which made for long waits between the successive runs of a debugging process. Building on work by Fernando Corbató, they completed the Dartmouth Time Sharing System, or DTSS, later in 1964. Like Basic, it revolutionized computing.

Ever the innovator, Kemeny served as president of Dartmouth, 1970-81, introducing coeducation to the school in 1972 after more than two centuries of all-male enrollment.

Monday, April 28, 2008

The Race to Zero

The Race to Zero contest is being held during Defcon 16 at the Riviera Hotel in Las Vegas, 8-10 August 2008.

The event involves contestants being given a sample set of viruses and malcode to modify and upload through the contest portal. The portal passes the modified samples through a number of antivirus engines and determines if the sample is a known threat. The first team or individual to pass their sample past all antivirus engines undetected wins that round. Each round increases in complexity as the contest progresses.



There are a number of key ideas we want to get across by running this event:

1. Reverse engineering and code analysis are fun.

2. Not all antivirus is equal; some products are far easier to circumvent than others. Poorly performing antivirus vendors should be called out.

3. The majority of the signature-based antivirus products can be easily circumvented with a minimal amount of effort.

4. The time taken to modify a piece of known malware so that it evades a good proportion of scanners is disproportionately small compared to the cost of antivirus protection and the losses that result from the trust placed in it.

5. Signature-based antivirus is dead; people need to look to heuristic, statistical and behaviour-based techniques to identify emerging threats.

6. Antivirus is just part of the larger picture; you need to look at controlling your endpoint devices with patching, firewalling and sound security policies to remain virus free.

We are not creating new viruses, and modified samples will not be released into the wild, contrary to the belief of some media organisations.

Above all we want the contestants to have fun!

Wednesday, April 16, 2008

Not so different

The following are programs written in Ada, C and Java that print to the screen the phrase "Hello World."

ADA PROGRAMMING LANGUAGE

with Ada.Text_IO;
procedure Hello_World is
begin
   Ada.Text_IO.Put_Line ("Hello World");
   Ada.Text_IO.Put_Line ("from Ada");
end Hello_World;



C PROGRAMMING LANGUAGE

#include <stdio.h>

int main(void)
{
    printf("\nHello World\n");
    return 0;
}



JAVA PROGRAMMING LANGUAGE

class HelloWorldJavaProgram
{
    public static void main(String[] args)
    {
        System.out.println("Hello World!");
    }
}

The return of ADA

Last fall, contractor Lockheed Martin delivered an update to the Federal Aviation Administration’s next-generation flight data air traffic control system — ahead of schedule and under budget, which is something you don’t often hear about in government circles.

The project, dubbed the En Route Automation Modernization System (ERAM), involved writing more than 1.2 million lines of code and had been labeled by the Government Accountability Office as a high-risk effort. GAO worried that many bugs in the program would appear, which would delay operations and drive up development costs.

Although the project’s success can be attributed to a lot of factors, Jeff O’Leary, an FAA software development and acquisition manager who oversaw ERAM, attributed at least part of it to the use of the Ada programming language.

About half the code in the system is Ada, O’Leary said, and it provided a controlled environment that allowed programmers to develop secure, solid code.

Today, when most people refer to Ada, it’s usually as a cautionary tale. The Defense Department commissioned the programming language in the late 1970s.

The idea was that mandating its use across all the services would stem the proliferation of many programming languages and even a greater number of dialects. Despite the mandate, few programmers used Ada, and the mandate was dropped in 1997. Developers and engineers claimed it was difficult to use.

Military developers stuck with the venerable C programming language they knew well, or they moved to the up-and-coming C++. A few years later, Java took hold, as did Web application languages such as JavaScript.

However, Ada never vanished completely. In fact, in certain communities, notably aviation software, it has remained the programming language of choice.

“It’s interesting that people think that Ada has gone away. In this industry, there is a technology du jour. And people assume things disappear. But especially in the Defense Department, nothing ever disappears,” said Robert Dewar, president of AdaCore and a professor emeritus of computer science at New York University.

Dewar has been working with Ada since 1980.

Last fall, the faithful gathered at the annual SIGAda 2007 conference in Fairfax, Va., where O’Leary and others spoke about Ada’s promise.

This decades-old language can solve a few of today’s most pressing problems — most notably security and reliability.

“We’re seeing a resurgence of interest,” Dewar said. “I think people are beginning to realize that C++ is not the world’s best choice for critical code.”

Tough requirements

ERAM is the latest component in a multi-decade plan to upgrade the country’s air traffic control system. Not surprisingly, the system had some pretty stringent development requirements, O’Leary said.

The system could never lose data. It had to be fault-tolerant. It had to be easily upgraded. It had to allow for continuous monitoring. Programs had to be able to recover from a crash. And the code that runs the system must “be provably and test-ably free” of errors, O’Leary said.

And such testing should reveal when errors occur and when the correct procedures fail to occur. “If I get packet 218, but not 217, it would request 217 again,” he said.

Ada can offer assistance to programmers with many of these tasks, even if it does require more work on the part of the programmer.

“The thing people have always said about Ada is that it is hard to get a program by the compiler, but once you did, it would always work,” Dewar said. “The compiler is checking a lot of stuff. Unlike a C program, where the C compiler will accept pretty much anything and then you have to fight off the bugs in the debugger, many of the problems in Ada are found by the compiler.”

That stringency causes more work for programmers, but it will also make the code more secure, Ada enthusiasts say.

When DOD commissioned the language in 1977 from the French Bull Co., it required that it have lots of checks to ensure the code did what the programmer intended, and nothing more or less.

For instance, unlike many modern languages and even traditional ones such as C and C++, Ada has a feature called strong typing. This means that for every variable a programmer declares, he or she must also specify a range of all possible inputs. If the declared range is 1 to 100, for instance, and the number 102 is entered, then the program won't accept that data.

This ensures that a malicious hacker can't enter a long string of characters as part of a buffer overflow attack, and that a wrong value won't later crash the program.
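
For readers who know C++ better than Ada, here is a rough, hypothetical C++ sketch of the same idea: a small wrapper type that rejects out-of-range values at run time. In Ada the equivalent check comes directly from declaring a constrained subtype; the class below only imitates that behaviour.

#include <iostream>
#include <stdexcept>

// Hypothetical sketch of an Ada-style constrained integer in C++.
// Any attempt to store a value outside [Low, High] is rejected.
template <int Low, int High>
class RangedInt {
public:
    explicit RangedInt(int v) : value_(check(v)) {}
    RangedInt& operator=(int v) { value_ = check(v); return *this; }
    int get() const { return value_; }
private:
    static int check(int v) {
        if (v < Low || v > High)
            throw std::out_of_range("value outside declared range");
        return v;
    }
    int value_;
};

int main() {
    try {
        RangedInt<1, 100> percentage(42);   // accepted: within 1..100
        percentage = 102;                   // rejected: outside 1..100
    } catch (const std::out_of_range& e) {
        std::cout << "rejected: " << e.what() << '\n';
    }
}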

Ada allows developers to prove security properties about programs. For instance, a programmer might want to prove that a variable is not altered while it is being used through the program. Ada is also friendly to static analysis tools. Static analysis looks at the program flow to ensure odd things aren’t taking place — such as making sure the program always calls a certain function with the same number of arguments. “There is nothing in C that stops a program from doing that,” Dewar said. “In Ada, it is impossible.”

Ada was not perfect for the ERAM job, O’Leary said. There are more than a few things that are still needed. One is better analysis tools.

“We’re not exploiting the data” to the full extent that it could be used, he said. The component interfaces could be better. There should also be tools for automatic code generation and better cross-language support.

Nonetheless, many observers believe the basics of Ada are in place for wider use.

Use cases

Who uses Ada? Not surprisingly, DOD still uses the language, particularly for command and control systems, Dewar said. About half of AdaCore’s sales are to DOD. AdaCore offers an integrated developer environment called GnatPro, and an Ada compiler.

“There [are] tens of millions of lines of Ada in Defense programs,” Dewar said.

NASA and avionics hardware manufacturers are also heavy users of Ada, he said. Anything mission-critical would be suitable for Ada. For instance, embedded systems in the Boeing 777 and 787 run Ada code.

In all these cases, the component manufacturers are “interested in highly reliable mission-critical programs. And that is the niche that Ada has found its way into,” Dewar said.

In addition to AdaCore, IBM Rational and Green Hills Software offer Ada developer environments.

It also works well as a teaching language. The Air Force Academy found it to be a good language that inexperienced programmers could use to build robust programs. At the SIGAda conference, instructor Leemon Baird III showed how a student used Ada to build an artificial-intelligence function for a computer to play a game called Connect4 against human opponents.

“A great part of his success was due to Ada’s features,” Baird said.

Although it was only 2,000 lines, the language allowed the student to write robust code.

“It had to be correct,” he said. The code flowed easily between Solaris and Windows, and could be run across different types of processors with minimal porting.

Programs written in an extension of Ada, called Spark, will be used to run the next generation U.K. ground station air traffic control system, called Interim Future Area Control Tools Support (IFacts).

Praxis, a U.K. systems engineering company, is providing the operating code for IFacts. In 2002, London Heathrow Airport, England’s busiest airport, suffered a software-based breakdown of its airplane routing system.

Praxis is under a lot of pressure to ensure its code is free from defects.

Praxis also used Spark for a 2006 National Security Agency-funded project, called the Tokeneer ID Station, said Rod Chapman, an engineer at Praxis. The idea was to create software that would meet the Common Criteria requirements for Evaluation Assurance Level 5, a process long thought to be too challenging for commercial software.

To do this, the software code that was generated had to have a low number of errors. The program itself was access control software.

Someone wishing to gain entry to a secure facility and use a workstation would need the proper smart card and provide a fingerprint.

By using Spark, a static check was made of the software before it was run, to ensure all the possible conditions led to valid outcomes. In more than 9,939 lines of code, no defects were found after the testing and remediation process was completed.

Although the original language leaned heavily toward strong typing and provability, subsequent iterations have kept Ada modernized, Dewar said. Ada 95 added object-oriented programming capabilities, and Ada 2005 tightened the security-related checks even further. The language has also been ratified as a standard by the American National Standards Institute and by the International Organization for Standardization (ISO/IEC 8652).

Ada was named for Augusta Ada King, Countess of Lovelace, daughter of Lord Byron.

In 1843, she published what is considered by most to be the world’s first computer program, written for Charles Babbage’s proposed Analytical Engine. But don’t let the language’s historical legacy fool you — it might be just the thing to answer tomorrow’s security and reliability challenges.

Monday, April 14, 2008

Fedora 9 Sulphur - Release date

Tools to access Linux Partitions from Windows

If you dual boot Windows and Linux and have data spread across partitions on both, you have probably run into some issues.

Sometimes you need to access files on your Linux partitions from Windows, only to realize it isn't easy. With these tools in hand, though, it becomes quite simple.

Explore2fs

Explore2fs is a GUI explorer tool for accessing ext2 and ext3 filesystems. It runs under all versions of Windows and can read almost any ext2 and ext3 filesystem.

Project Home Page: http://www.chrysocome.net/explore2fs

Friday, April 4, 2008

C++ Historical Sources Archive

Abstract
This is a collection of design documents, source code, and other materials concerning the birth, development, standardization, and use of the C++ programming language.



1979 April
Work on C with Classes began
1979 October
First C with Classes (Cpre) running
1983 August
First C++ in use at Bell Labs
1984
C++ named
1985 February
Cfront Release E (first external C++ release)
1985 October
Cfront Release 1.0 (first commercial release); The C++ Programming Language (1st edition)
1986
First commercial Cfront PC port (Cfront 1.1, Glockenspiel)
1987 February
Cfront Release 1.2
1987 December
First GNU C++ release (1.13)
1988
First Oregon Software C++ release [announcement]; first Zortech C++ release
1989 June
Cfront Release 2.0
1989
The Annotated C++ Reference Manual; ANSI C++ committee (J16) founded (Washington, DC)
1990
First ANSI X3J16 technical meeting (Somerset, NJ) [see group photograph, courtesy of Andrew Koenig]; templates accepted (Seattle, WA); exceptions accepted (Palo Alto, CA); first Borland C++ release
1991
First ISO WG21 meeting (Lund, Sweden); Cfront Release 3.0 (including templates); The C++ Programming Language (2nd edition)
1992
First IBM, DEC, and Microsoft C++ releases
1993
Run-time type identification accepted (Portland, Oregon); namespaces accepted (Munich, Germany); A History of C++: 1979-1991 published at HOPL2
1994
string (templatized by character type) (San Diego, California); the STL accepted (San Diego, CA and Waterloo, Canada)
1996
export accepted (Stockholm, Sweden)
1997
Final committee vote on the complete standard (Morristown, New Jersey)
1998
ISO C++ standard ratified
2003
Technical Corrigendum; work on C++0x started
2004
Performance technical report; Library technical report (hash tables, regular expressions, smart pointers, etc.)
2005
First votes on features for C++0x (Lillehammer, Norway); auto, static_assert, and rvalue references accepted in principle
2006
First full committee (official) votes on features for C++0x (Berlin, Germany)

Programmers At Work, 22 Years Later

In 1986, the book Programmers at Work presented interviews with 19 programmers and software designers from the early days of personal computing including Charles Simonyi, Andy Hertzfeld, Ray Ozzie, Bill Gates, and Pac Man programmer Toru Iwatani. Leonard Richardson tracked down these pioneers and has compiled a nice summary of where they are now, 22 years later.

Where Are They Now?

Charles Simonyi. Then, Microsoft programmer. Now: super-rich guy, space tourist, endowing Oxford chairs and whatnot. Works at Intentional Software.

Butler Lampson. Then, PARC dude. Now: a Microsoft Fellow.

John Warnock. Then: co-founder of Adobe. Now: retired, serves on boards of directors, apparently runs a bed and breakfast.

Gary Kildall. Then: author of CP/M. Died in 1994. The project he was working on in Programmers at Work became the first encyclopedia distributed on CD-ROM. He also hosted Computer Chronicles for a while.

Bill Gates. Then: founder of Microsoft, popularizer of the word "super". Now: richest guy in the world. After a stint in the 90s as pure evil, semi-retired to focus on philanthropic work.

John Page. Then: co-founder of the Software Publishing Company, makers of PFS:FILE, an early database program. Now: I'm not really sure. Here's a video of him from 2006, so he's probably still alive, but he's not on the web. SPC was acquired in 1996. Through some odd corporate synergy the public face of the business now appears to be Harvard Graphics.

C. Wayne Ratliff. Then: author of dBase. Now: retired.

Dan Bricklin. Then: co-author of VisiCalc. Now: Has a weblog and lots of accessible historical information about his projects. Still runs Software Garden. Still looks almost exactly like his illustration in PaW, leading some to speculate on a "Spreadsheet of Dorian Gray" type effect. I secretly hope he will see this in referer logs and invite me to hang out with him.

Bob Frankston. Then: the other half of VisiCalc. Now: worked for Microsoft for a few years, now retired, has a weblog.

Jonathan Sachs. Then: co-author of Lotus 1-2-3. Now: semi-retired. Gives away Pocket PC software from his home page, and sells photography software as Digital Light & Color. More details in this 2004 oral history.

Ray Ozzie. Then: Lotus Symphony dude, left Lotus to write what would eventually be sold as Lotus Notes. Now: Chief Software Architect at Microsoft, after working for IBM and starting Groove Networks. Has a weblog, but hasn't posted for about a year.

Peter Roizen. Then: author of T/Maker, a spreadsheet program. Now: programmer consultant. Inventor of a Scrabble variant that uses shell glob syntax.

Bob Carr. Then: PARC Alum, Chief Scientist at Ashton-Tate, author of Framework integrated suite. Now: founder of Keep and Share. In between: co-founded Go, worked for Autodesk. Doesn't seem to have a web presence.

Jef Raskin. Then: Macintosh project creator, founder of Information Appliance. Died in 2005. His excellent web site is still up. Author of well-respected book The Humane Interface. The project he's working on in PaW, the SwyftCard, was a minor success.

Andy Hertzfeld. Then: Macintosh OS developer. Now: works at Google and hosts a bunch of websites, including folklore.org and Susan Kare's site. (Incidentally, Susan Kare now works for Chumby.) In between: worked at General Magic and Eazel, which probably only people who read this weblog remember.
Most of the people profiled in PaW provide some sample of their programming or thought process. Hertzfeld has the best one: an assembler program that makes Susan Kare's Macintosh icons bounce around a window.

Toru Iwatani. Then: designer of Pac-Man. Now: retired from Namco in 2007. Visiting professor at a Japanese university (the University of Arts in Osaka or Tokyo Polytechnic, depending on which source you believe). In PaW very proud of a game called Libble Rabble, which I'd never heard of. I believe PaW interview was for a while the only English-language information available about Iwatani.
Significantly, in a recent interview Iwatani refused to comment on Ms. Pac-Man's relationship to Pac-Man. Possibly because Ms. Pac-Man is actually Pac-Man's transgendered clone, and Namco doesn't want word getting out.

Scott Kim. The only person mentioned in PaW I've met. Then: basically a puzzle designer. Now: still a puzzle designer. His website. Also has an interest in math education.

Jaron Lanier. Then: working on a visual programming/simulation language. Blows Susan Lammers's mind with a description of virtual reality (see also "Virtual World" in Future Stuff). Now: scholar in residence at Berkeley, occasional columnist for Discover. Lots of stuff on his website. Here's video of a game he wrote.

Report: boot sector viruses and rootkits poised for comeback

Security firm Panda Labs has released (PDF) its malware report for the first quarter 2008. The report covers a number of topics and makes predictions about the types of attacks we may see in the future. Forecasting these trends is always tricky—no one expected the Storm Worm to explode when it did—but Panda's prediction that we may see a rise in boot sector viruses is rather surprising. We'll touch on malware first, however, and return to this topic shortly.

Thus far, adware, trojans, and miscellaneous "other" malware including dialers, viruses, and hacking tools have captured the lion's share of the "market" as it were. These three categories account for 80.55 percent of the malware Panda Labs detected over the first quarter.

Password-stealing trojans are still a growing market, and the report cautions users, as always, to be careful of their banking records... and their World of WarCraft/Lineage II passwords. It might be interesting to take a poll of hardcore World of WarCraft players and see which of these two categories they care more about protecting, but the results would likely make a parent weep. One can always make more money, after all, but raiding Sunwell Plateau is serious business.

From here, Panda Labs trots through familiar territory. The monetization of the malware market, the prevalence of JavaScript/IFrame attack vectors, and the growing number of prepackaged virus-building kits are all issues that the report raises. We've covered all of these before, but if you've not been paying attention and want to catch up on general malware trends, the report is a good place to do it. Also, just in case you missed it, social engineering-based attacks are both dangerous and effective, and social networks, particularly those based around Web 2.0, are often tempting attack targets.



Panda's report does raise a new concern, though it comes from a surprising direction. According to the company, boot sector viruses loaded with rootkits are poised to make a comeback. This honestly sounds a bit odd, considering how long it has been since a boot virus has topped the malware charts, but it's at least theoretically possible. Such viruses have a simple method of operation. The virus copies itself into the Master Boot Record (MBR) of a hard drive, and rewrites the actual MBR data in a different section of the drive.

Once a rootkit is loaded into the MBR, it can use its position to obfuscate its own activity. This is obviously rather handy when attempting to hide from rootkit-detection software, and could cause a new set of headaches for antivirus software if the threat actually materializes. Panda Lab's report does a good job of explaining what a boot virus is and how it can infect a system, but it says virtually nothing about why such attack vectors are a concern today.

The problem with boot viruses is that their attack vector is fairly well-guarded. Any antivirus program worth beans will detect a suspicious attempt to modify the MBR and will alert the end user accordingly. Running as a user rather than an administrator should also prevent such modification even if you don't have an antivirus scanner installed. Panda implies that this kind of exploit could be an issue in Linux, and I suppose that's theoretically possible, but Linux always creates a user account without root access by default.

Windows Vista, for its part, recommends that you run in user mode, even though the OS doesn't require it. Even in admin mode, a virus can't just get away with this type of modification, and UAC would pick up and flag any attempt to overwrite the MBR. Even if none of these barriers existed, there's still the issue of BIOS-enabled boot sector protection, which exists entirely to prevent this type of attack from occurring. If you want to catch a boot sector virus, in other words, you'll have to work at it.

Aside from the company's surprising conclusion regarding boot viruses, Panda Lab's report paints the picture of illegal businesses doing business as usual. In a way, this is actually a good thing. AV companies currently have their collective hands full dealing with the number of variants that are still spinning off the attacks and infections from last year, and the last thing the industry needs is for the Son of Storm to make an appearance.

An Interview with Bjarne Stroustrup

C++ creator Bjarne Stroustrup discusses the evolving C++0x standard, the education of programmers, and the future of programming.



JB: When did you first become interested in computing, what was your first computer and what was the first program you wrote?

BS: I learned programming in my second year of university. I was signed up to do "mathematics with computer science" from the start, but I don't really remember why. I suspect that I (erroneously) thought that computing was some sort of applied math.

My first computer was the departmental GIER computer. It was almost exclusively programmed in Algol-60. My first semi-real program plotted lines (on paper!) between points on the edge of a superellipse to create pleasant graphical designs. That was in 1970.

JB: When you created C++, was the object oriented programming (OOP) paradigm (or programming style) obviously going to gain a lot of popularity in the future, or was it a research project to find out if OOP would catch on?

BS: Neither! My firm impression (at the time) was that all sensible people "knew" that OOP didn't work in the real world: It was too slow (by more than an order of magnitude), far too difficult for programmers to use, didn't apply to real-world problems, and couldn't interact with all the rest of the code needed in a system. Actually, I'm probably being too optimistic here: "sensible people" had never heard of OOP and didn't want to hear about it.

I designed and implemented C++ because I had some problems for which it was the right solution: I needed C-style access to hardware and Simula-style program organization. It turned out that many of my colleagues had similar needs. Actually, then it was not even obvious that C would succeed. At the time, C was gaining a following, but many people still considered serious systems programming in anything but assembler adventurous and there were several languages that—like C—provided a way of writing portable systems programs. One of those others might have become dominant instead of C.

JB: Before C++, did you "just have to create C++" because of the inadequacy of other languages, for example? In essence, why did you create C++?

BS: Yes, I created C++ in response to a real need: The languages at the time didn't support abstraction for hard systems programming tasks in the way I needed it. I was trying to separate the functions of the Unix kernel so that they could run on different processors of a multi-processor or a cluster.

JB: Personally, do you think OOP is the best programming paradigm for large scale software systems, as opposed to literate programming, functional programming, procedural programming, etc.? Why?

BS: No programming paradigm is best for everything. What you have is a problem and a solution to it; then, you try to map that solution into code for execution. You do that with resource constraints and concerns for maintainability. Sometimes, that mapping is best done with OOP, sometimes with generic programming, sometimes with functional programming, etc.

OOP is appropriate where you can organize some key concepts into a hierarchy and manipulate the resulting classes through common base classes. Please note that I equate OO with the traditional use of encapsulation, inheritance, and (run time) polymorphism. You can choose alternative definitions, but this one is well-founded in history.

I don't think that literate programming is a paradigm like the others you mention. It is more of a development method like test-driven development.

C++0x

JB: In your paper, "The Design of C++0x," published in the May 2005 issue of the C/C++ Users Journal, you note that "C++'s emphasis on general features (notably classes) has been its main strength." In that paper you also mention that most of the changes and new features will be in the Standard Library. A lot of people would like to see regular expressions, threads and the like, for example. Could you give us an idea of new classes or facilities that we can expect to see in C++0x's Standard Library?

BS: The progress on standard libraries has not been what I hoped for. We will get regular expressions, hash tables, threads, many improvements to the existing containers and algorithms, and a few more minor facilities. We will not get the networking library, the date and time library, or the file system library. These will wait until a second library TR. I had hoped for much more, but the committee has so few resources and absolutely no funding for library development.

JB: Have you or others working on C++0x had a lot of genuinely good ideas for new classes or facilities? If so, will all of them be used or will some have to be left out because of time and other constraints on developing a new standard? If that is the case, what would most likely be left out?

BS: There is no shortage of good ideas in the committee or of good libraries in the wider C++ community. There are, however, severe limits to what a group of volunteers working without funding can do. What I expect to miss most will be thread pools and the file system library. However, please note that the work will proceed beyond '09 and that many libraries are already available; for example see what boost.org has to offer.

JB: When would you expect the C++0x Standard to be published?

BS: The standard will be finished in late 2008, but it takes forever to go through all the hoops of the ISO process. So, we must face the reality that "C++0x" may become C++10.

JB: Concurrent programming is obviously going to become important in the future, because of multi-core processors and kernels that get better at distributing processes among them. Do you expect C++0x will address this, and if so, how?

BS: The new memory model and a task library were voted into C++0x in Kona. That provides a firm basis for shared-memory multiprocessing, which is essential for multicores. Unfortunately, it does not address higher-level models for concurrency such as thread pools and futures, shared memory parallel programming, or distributed memory parallel processing. Thread pools and futures are scheduled for something that's likely to be C++13. Shared memory can be had using Intel's Threading Building Blocks, and distributed memory parallel processing is addressed by STAPL from Texas A&M University and other research systems. The important thing here is that given the well-defined and portable platform provided by the C++0x memory model and threads, many higher-level models can be provided.
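
As a rough illustration of what that task library makes possible, here is a minimal sketch of starting and joining a few threads. The syntax shown is the one that eventually shipped in C++11; the working-paper version discussed here differed in detail.

#include <iostream>
#include <thread>
#include <vector>

// Minimal sketch: launch a few worker threads and wait for them all.
int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([i] { std::cout << "worker " << i << " running\n"; });
    for (auto& t : workers)
        t.join();   // block until each worker has finished
}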

Distributed programming isn't addressed, but there is a networking library scheduled for a technical report. That library is already in serious commercial use and its public domain implementation is available from boost.org.

Educating the Next Generation

JB: Do you think programmers should be "armed and dangerous" with their tools like compilers, editors, debuggers, linkers, version control systems and so on very early on in their learning or careers? Do you think that universities should teach debugging and how to program in a certain environment (e.g. Unix) as well, for example?

BS: I'm not 100% sure I understand the question, but I think "yes." I don't think that it should be possible to graduate with a computer science, computer engineering, etc. degree without having used the basic tools you mention above for a couple of major projects. However, it is possible. In some famous universities, I have observed that Computer Science is the only degree you can get without writing a program.

I'm not arguing for mere training. The use of tools must not be a substitute for a good understanding of the principles of programming. Someone who knows all the power-tools well, but not the fundamental principles of software development would indeed be "armed and dangerous." Let me point to algorithms, data structures, and machine architecture as essentials. I'd also like to see a good understanding of operating systems and networking.

Some educators will point out that all of that—together with ever-popular and useful topics such as graphics and security—doesn't fit into a four-year program for most students. Something will have to give! I agree, but I think what should give is the idea that four years is enough to produce a well-rounded software developer: Let's aim to make a five-or-six-year masters the first degree considered sufficient.

JB: What should C++ programmers or any programmer do, in your view, before sitting down to write a substantial software program?

BS: Think. Discuss with colleagues and potential users. Get a good first-order understanding of the problem domain. If possible, try to be a user of an existing system in that field. Then, without too much further agonizing, try to build a simplified system to try out the fundamental ideas of a design. That "simplified system" might become a throwaway experiment or it may become the nucleus of a complete system. I'm a great fan of the idea of "growing" a system from simpler, less complete, but working and tested systems. Try out all the tool chains before making too-grand plans.

How would the programmer, designer, team get those "fundamental ideas of a design"? Experience, knowledge of similar systems, of tools, and of libraries is a major part of the answer. The idea of a single developer carefully planning to write a system from the bare programming language has realistically been a myth for decades. David Wheeler wrote the first paper about how to design libraries in 1951—56 years ago!

JB: What type of programs do you personally enjoy writing? What programs have you written recently?

BS: These days I don't get enough time to write code, but I think writing libraries is the most fun. I wrote a small library supporting N-dimensional arrays with the usual mathematical operations. I have also been playing with regular expressions using the (draft) C++0x library (the boost.org version).

I also write a lot of little programs to test aspects of the language, but that's more work than fun, and also small programs to explore application domains that I haven't tried or haven't tried lately. It is a rare week that I don't write some code.
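
A hypothetical example of the kind of small regular-expression experiment mentioned above, written against the interface that eventually shipped in C++11 as <regex> (the draft C++0x/boost version he refers to is very close):

#include <iostream>
#include <regex>
#include <string>

// Pull an ISO-style date out of a line of text.
int main() {
    const std::regex date_pattern("(\\d{4})-(\\d{2})-(\\d{2})");
    const std::string line = "the next draft is expected around 2008-06-15";
    std::smatch match;
    if (std::regex_search(line, match, date_pattern))
        std::cout << "year " << match[1] << ", month " << match[2]
                  << ", day " << match[3] << '\n';
}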

JB: In your experience, have there been any features of C++ that newcomers to the language have had the most difficulty with? Would you have any advice for newcomers to C++ or have you found a way of teaching difficult features of C++ saving students a lot of trial and error?

BS: Some trial and error is inevitable, and may even be good for the newcomer, but yes I have some experience introducing C++ to individuals and organizations—some of it successful. I don't think it's the features that are hard to learn, it is the understanding of the programming paradigms that cause trouble. I'm continuously amazed at how novices (of all backgrounds and experiences) come to C++ with fully formed and firm ideas of how the language should be used. For example, some come convinced that any techniques not easily used in C is inherently wrongheaded, hard to use, and very inefficient. It's amazing what people are willing to firmly assert without measurements and often based on briefly looking at C++ a decade ago using a compiler that was hardly out of beta—or simply based on other people's assertions without checking if they have a basis in reality. Conversely, there is now a generation who is firmly convinced that a program is only well-designed if just about everything is part of a class hierarchy and just about every decision is delayed to run-time. Obviously programs written by these "true OO" programmers become the best arguments for the "stick to C" programmers. Eventually, a "true OO" programmer will find that C++ has many features that don't serve their needs and that they indeed fail to gain that fabled efficiency.

To use C++ well, you have to use a mix of techniques; to learn C++ without undue pain and unnecessary effort, you must see how the language features serve the programming styles (the programming paradigms). Try to see concepts of an application as classes. It's not really hard when you don't worry too much about class hierarchies, templates, etc. until you have to.

Learn to use the language features to solve simple programs at first. That might sound trivial, but I receive many questions from people who have "studied C++" for a couple of weeks and are seriously confused by examples of multiple inheritance using names like B1, B2, D, D2, f, and mf. They are—usually without knowing it—trying to become language lawyers rather than programmers. I don't need multiple inheritance all that often, and certainly not the tricky parts. It is far more important to get a feel for writing a good constructor to establish an invariant and understand when to define a destructor and copy operations. My point is that the latter is almost instantly useful and not difficult (I can teach it to someone who has never programmed after a month). The finer details of inheritance and templates, on the other hand, are almost impenetrable until you have a real-world program that needs them. In The Art of Computer Programming, Don Knuth apologizes for not giving good examples of co-routines, because their advantages are not obvious in small programs. Many C++ features are like that: they don't make sense until you face a problem of the kind and scale that needs that feature.
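
As a hedged illustration of the point about constructors, destructors, and copy operations, here is a small, hypothetical class whose constructor establishes an invariant (the pointer always refers to size() valid elements) and whose copy operations and destructor preserve it:

#include <algorithm>
#include <cstddef>

// Hypothetical example: the constructor establishes the invariant,
// and the copy operations and destructor maintain it.
class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), elems_(new double[n]()) {}
    Buffer(const Buffer& other)
        : size_(other.size_), elems_(new double[other.size_]) {
        std::copy(other.elems_, other.elems_ + size_, elems_);
    }
    Buffer& operator=(const Buffer& other) {
        Buffer tmp(other);             // copy-and-swap keeps the invariant
        std::swap(size_, tmp.size_);
        std::swap(elems_, tmp.elems_);
        return *this;
    }
    ~Buffer() { delete[] elems_; }
    std::size_t size() const { return size_; }
    double& operator[](std::size_t i) { return elems_[i]; }
private:
    std::size_t size_;
    double* elems_;
};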

JB: Do you have any suggestions for people who are not programmers and want to learn how to program, and want to learn C++ as their first language? For instance, there's a book called Accelerated C++: Practical Programming by Example by Andrew Koenig and Barbara E. Moo. This book's approach is to teach by using the STL and advanced features at an early stage, like using strings, vectors and so on, with the aim of writing "real" programs faster. Would you agree that it's best to begin "the C++ way" if you could call it that, instead of starting off with a strictly procedural style and leaving classes, the Standard Library and other features often preceded with "an introduction to OOP" much later in a book?

BS: I have had to consider this question "for real" and have had the opportunity to observe the effects of my theories. I designed and repeatedly taught a freshman programming course at Texas A&M University. I use standard library features, such as string, vector, and sort, from the first week. I don't emphasize STL; I just use the facilities to have better types with which to introduce the usual control structures and programming techniques. I emphasize correctness and error handling from day 1. I show how to build a couple of simple types in lecture 6 (week three). I show much of the mechanisms for defining classes in lectures 8 and 9 together with the ways of using them. By lecture 10 and 11, I have the students using iostreams on files. By then, they are tired, but can read a file of structured temperature data and extract information from it. They can do it 6 weeks after seeing their first line of code. I emphasize the use of classes as a way of structuring code to make it easier to get right.

After that comes graphics, including some use of class hierarchies, and then comes the STL. Yes, you can do that with complete beginners in a semester. We have by now done that for more than 1,000 students. The reason for putting the STL after graphics is purely pedagogical: after iostreams the students are thoroughly tired of "calculations and CS stuff," but doing graphics is a treat! The fact that they need the basics of OOP to do that graphics is a minor detail. They can now graph the data read and fill class objects from a GUI interface.
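
A hypothetical sketch of the sort of exercise described, using only vector, iostreams, and sort (the file name temps.txt is made up for the example):

#include <algorithm>
#include <fstream>
#include <iostream>
#include <vector>

// Read whitespace-separated temperature readings and report simple statistics.
int main() {
    std::ifstream in("temps.txt");   // made-up file name for the example
    std::vector<double> temps;
    double reading;
    while (in >> reading)
        temps.push_back(reading);
    if (temps.empty()) {
        std::cout << "no readings found\n";
        return 1;
    }
    std::sort(temps.begin(), temps.end());
    std::cout << temps.size() << " readings, min " << temps.front()
              << ", max " << temps.back() << '\n';
}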

JB: I read an interview in Texas A&M Engineering Magazine where you said, "I decided to design a first programming course after seeing how many computer science students—including students from top schools—lacked fundamental skills needed to design and implement quality software..." What were these fundamental skills students lacked, and what did you put in your programming course to address this issue?

BS: I saw so many students who simply didn't have the notion that code itself is a topic of interest and that well-structured code is a major time saver. The notion of organizing code to be sure that it is correct and maybe even for someone else to use and modify is alien: They see code as simply something needed to hand in the answers to an exercise. I am of course not talking about all students or just students from one university or from one country.

In my course I heavily emphasize structure, correctness, and define the purpose of the course as "becoming able to produce code good enough for the use of others." I use data structures and algorithms, but the topic of the course is "programming" not fiddling with pointers to implement a doubly linked list.

And yes, I do teach pointers, arrays, and casts, but not until well after strings, vectors, and class design. You need to give the students a feel of the machine as well as the mechanisms to make the (correct) use of the machine simple. This also reflects comments I have repeatedly had from industry: that they have a shortage of developers who understand "the machine" and "systems."

JB: You have said that a programmer must be able to think clearly, understand questions and express solutions. This is in agreement with G. Polya's thesis that you must have a clear and complete understanding of a question before you can ever hope to solve it. Would you recommend supplementary general reading like G. Polya's book, How to Solve It along with reading books on programming and technique? If so, what books would you recommend?

BS: I avoid teaching "how to think." I suspect that's best taught through lots of good examples. So I give lots of good examples (to set a standard) including examples of gradual development of a program from early imperfect versions. I'm not saying anything against Polya's ideas, but I don't have the room for it in my approach. The problem with designing a course (or a curriculum) is more what to leave out than what to add.

JB: What are the most useful mathematical skills, generally, that a programmer should have an understanding of if they intend to become professional, in your view? Or would there be different mathematical skills a programmer should know for different programmers and different tasks? If this is the case could you give examples?

BS: I don't know. I think of math as a splendid way to learn to think straight. Exactly what math to learn and exactly where what kinds of math can be applied is secondary to me.

The Future of C++

JB: Your research group is looking into parallel and distributed systems. From this research, have any new ideas for the new C++0x standard come about?

BS: Not yet. The gap between a research result and a tool that can be part of an international standard is enormous. Together with Gabriel Dos Reis at TAMU, I have worked on the representation of C++ aiming at better program analysis and transformation (eventually to be applied to distributed system code). That will become important some day. A couple of my grad students analyzed the old problem of multi-methods (calling a function selected based on two or more dynamic types) and found a solution that can be integrated into C++ and performs better in time and space than any workaround. Together with Michael Gibbs from Lockheed-Martin, I developed a fast, constant-time, dynamic cast implementation. This work points to a future beyond C++0x.

JB: You have some thoughts on how programming can be improved, generally. What are they?

BS: There is immense scope for improvement. A better education is a start. I think that theory and practice have become dissociated in many cases, with predictably poor results. However, we should not fool ourselves into seeing education and/or training as a panacea. There are on the order of 10 million programmers "out there" and little agreement on how to improve education. In the early days of C++, I worried a lot about "not being able to teach teachers fast enough." I had reason to worry because much of the obvious poor use of C++ can be traced to fundamental misunderstandings among educators. I obviously failed to articulate my ideals and principles sufficiently. Given that the problems are not restricted to C++, I'm not alone in that. As far as I can see, every large programming community suffers, so the problem is one of scale.

"Better programming languages" is one popular answer, but with a new language you start by spending the better part of a decade to rebuild existing infrastructure and community, and the advance comes at the cost of existing languages: At least some of the energy and resources spent for the new language would have been spent on improving old ones.

There are so many other areas where significant improvements are possible: IDEs, libraries, design methods, domain specific languages, etc. However, to succeed, we must not lose sight of programming. We must remember that the aim is to produce working, correct, and well-performing code. That may sound trite, but I am continuously shocked over how little code is discussed and presented at some conferences that claim to be concerned with software development.

JB: An area of interest for you is multi-paradigm (or multiple-style) programming. Could you explain what this is for you, what you have been doing with multi-paradigm programming lately and do you have any examples of the usefulness of multi-paradigm programming?

BS: Almost all that I do in C++ is "multi-paradigm." I really have to find a better name for that, but consider: I invariably use containers (preferably standard library (STL) containers); they are parameterized on types and supported by algorithms. That's generic programming. On the other hand, I can't remember when I last wrote a significant program without some class hierarchies. That's object-oriented programming. Put pointers to base classes in containers and you have a mixture of GP and OOP that is simpler, less error prone, more flexible, and more efficient than what could be done exclusively in GP or exclusively in OOP. I also tend to use a lot of little free-standing types, such as Color and Point. That's basic data abstraction and such types are of course used in class hierarchies and in containers.
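
A hedged sketch of that mix: a standard container (generic programming) holding pointers to a small class hierarchy (object-oriented programming). The shared_ptr used here is the smart pointer that was being standardized for C++0x.

#include <cstddef>
#include <iostream>
#include <memory>
#include <vector>

// OOP part: a small hierarchy manipulated through a common base class.
struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};
struct Circle : Shape {
    explicit Circle(double r) : radius(r) {}
    double area() const { return 3.14159265 * radius * radius; }
    double radius;
};
struct Square : Shape {
    explicit Square(double s) : side(s) {}
    double area() const { return side * side; }
    double side;
};

// GP part: a standard container parameterized on its element type.
int main() {
    std::vector< std::shared_ptr<Shape> > shapes;
    shapes.push_back(std::shared_ptr<Shape>(new Circle(1.0)));
    shapes.push_back(std::shared_ptr<Shape>(new Square(2.0)));
    double total = 0;
    for (std::size_t i = 0; i != shapes.size(); ++i)
        total += shapes[i]->area();   // dynamic dispatch through the base class
    std::cout << "total area: " << total << '\n';
}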

JB: You are looking at making C++ better for systems programming in C++0x as I understand it, is that correct? If so, what general facilities or features are you thinking about for making C++ a great systems language?

BS: Correct. The most direct answer involves features that directly support systems programming, such as thread local storage and atomic types. However, the more significant part is improvements to the abstraction mechanisms that will allow cleaner code for higher-level operations and better organization of code without time or space overheads compared to low-level code. A simple example of that is generalized constant expressions, which allow compile-time evaluation of functions and guaranteed compile-time initialization of memory. An obvious use of that is to put objects in ROM. C++0x also offers a fair number of "small features" that, without adding to run-time costs, make the language better for writing demanding code, such as static assertions, rvalue references (for more efficient argument passing) and improved enumerations.
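
A hypothetical sketch of two of those features, in the syntax C++0x eventually settled on: a generalized constant expression that the compiler can evaluate at compile time (so the result can size an array or initialize data destined for ROM), and a static assertion checked during compilation.

// constexpr: the compiler can evaluate this call at compile time.
constexpr int cube(int n) { return n * n * n; }

// static_assert: checked during compilation, no run-time cost.
static_assert(cube(4) == 64, "compile-time evaluation failed");

// The array size is a constant expression computed from cube(3).
int lookup_table[cube(3)];

int main() {
    return sizeof(lookup_table) == 27 * sizeof(int) ? 0 : 1;
}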

Finally, C++0x will provide better support for generic programming and since generic programming using templates has become key to high performance libraries, this will help systems programming also. For example, here we find concepts (a type system for types and combinations of types) to improve static checking of template code.

JB: You feel strongly about better education for software developers. Would you say, generally, that education in computer programming is appalling? Or so-so? If you were to design a course for high school students and a course (an entire degree) for university students intending to become professional, what would you include in these courses and what would you emphasize?

BS: Actually, I just took part in an effort to do that for the four undergraduate years. Unfortunately, the descriptions you find of our program on the web are still a mix of new and old stuff—real-world programs can only be put in place in stages. The idea is to give the students a broad view of computer science during the first two years ("making them ready for their first internship or project") and then using the next two years to go into depth in some selected areas. During the first two years, the students get a fairly classical CS program with a slightly higher component of software development projects than is common. They have courses in hardware and software (using C++), there is some discrete math, algorithms and data structures, (operating and network) systems, programming languages, and a "programming studio" exposing them to group projects and some project management.

JB: In an ideal world for you, what will C++0x be in terms of all the goodies in the new Standard Library and language?

BS: Unfortunately, we don't live in an ideal world and C++0x won't get all the "goodies" I'd like, and probably fewer "minor" features than I would have liked. Fortunately, the committee has decided to try for more and smaller increments. For example, C++0x (whether that'll be C++09 or C++10) will have only the preparations for programmer-controlled garbage collection and lightweight concurrency, whereas we hope for the full-blown facilities in C++12 (or C++13).

I do—based on existing work and votes—expect to get:

* Libraries
o Threads
o Regular expressions
o Hash tables
o Smart pointers
o Many improvements for containers
o Quite a bit of support for new libraries
* Language
o A memory model supporting modern machine architectures
o Thread local storage
o Atomic types
o Rvalue references
o Static assertions
o Template aliases
o Variadic templates
o Strongly typed enums
o constexpr: Generalized constant expressions
o Control of alignment
o Delegating constructors
o Inheriting constructors
o auto: Deducing variable types from initializers
o Control of defaults
o nullptr: A name for the null pointer
o initializer lists and uniform initialization syntax and semantics
o concepts (a type system for template arguments)
o a range-based for loop
o raw string literals
o UTF8 literals
o Lambda functions
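
To make a few of those list items concrete, here is a hypothetical fragment exercising auto, the range-based for loop, a lambda function, nullptr, an initializer list, and a strongly typed enum, written in the syntax that eventually shipped:

#include <algorithm>
#include <iostream>
#include <vector>

enum class Severity { info, warning, error };          // strongly typed enum

int main() {
    std::vector<int> readings = {3, 1, 4, 1, 5, 9};    // initializer list
    std::sort(readings.begin(), readings.end(),
              [](int a, int b) { return a > b; });      // lambda function

    for (auto r : readings)                             // auto + range-based for
        std::cout << r << ' ';
    std::cout << '\n';

    const int* missing = nullptr;                       // a name for the null pointer
    Severity level = Severity::warning;
    if (missing == nullptr && level != Severity::error)
        std::cout << "no data yet, severity below error\n";
}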

For a more detailed description of my ideals, the work of the ISO C++ standards committee, and C++0x, see my HOPL-iii paper, "Evolving a language in and for the real world: C++ 1991-2006. ACM HOPL-III," from earlier this year. Also, see the ISO C++ committee's web site where you can find more details than you could possibly want (search for "WG21" and look for "papers").

JB: C++ is often used in embedded systems, including those where safety and security are top priorities. What are your favorite examples, and why do you think C++ is an ideal language for embedded systems, especially where safety is a concern, aside from easy low-level machine access?

BS: Yes, and I find many of those applications quite exciting. Too often people think of computing as "what runs on a PC with a single user in front of it." Obviously, my Bell Labs background biases me towards noticing the uses of software in cell phones, telecommunications devices, and systems in general. So much of our infrastructure is invisible and taken for granted! "The gadgets" we can see. That's one reason I like embedded systems programming. Another is the stringent demands on correctness (even with some hardware malfunction) and performance. In such software, there is a need for clear design and precise expression of ideas that can challenge a language.

Among my favorite examples are the modern wind-power generators and the huge diesel engines that power the largest container ships. We can't talk about invisibility here, but then people don't see those huge "structures" as containing computers running software that is critical to their correct, efficient, and economical performance. I have also seen some interesting uses of C++ in aerospace, notably the new "Joint Strike Fighter" (the F-35), but my favorite is the higher levels of the Mars Rover software (scene analysis and autonomous driving). The whole Rover project is really a stunning success. Both Rovers have outlived their promised design life by a factor of 6 and are (as I write this) still working their way across Mars looking, prodding, and sending home data. Again, the Rovers themselves are just the visible part of a huge complex system: just try to imagine what it takes to get the data back to earth and analyzed. Almost all of computer science and almost all of our engineering skills are involved here somewhere. The range of skilled people involved is really hard to imagine. Too often we forget the people.

I don't think there exists an "ideal language" for these kinds of systems, but C++ is a good one. Part of the reason is that any large system, such as a cell phone or a Rover, depends on its huge hidden infrastructure. Obviously, you don't have to use a single language for everything, but there are enough overlapping parts of applications for C++'s flexibility, generality, and concern for performance to come into play. Many languages deemed simpler achieve their simplicity by limiting their range of applications or by making great demands on the underlying hardware and (software) execution environments.

JB: Would you say that the document JSF++: Joint Strike Fighter Air Vehicle Coding Standards, which can be found among your C++ links under the heading "For a look at how ISO C++ can be used for serious embedded systems programming," is generally a good guide for any embedded systems programming, and perhaps for other things as well?

BS: Yes, that's a good guide for the kind of applications for which it was written. For those, I'm convinced that it's the best of its kind; I helped write it. It is very important to note, though, that with experience we will find improvements, and that if it is applied to areas for which it was not intended, it could do harm. For example, JSF++ prohibits the use of free store (dynamic store) and exceptions, whereas for general programming you typically need to use those features to get the best code. However, for high-reliability, hard real-time code, the simple fact that a new and a throw can't (in general) be guaranteed to execute in a short, fixed, constant time makes them a no-no. I also recommend the ISO C++ committee's Technical Report on performance, which can also be found on my C++ page. It is primarily aimed at embedded systems programming, but it discusses issues rather than laying out rules. Please also note that the JSF++ document is about half rationale: people should not be asked to do things "just because we say so." At least we can try to explain the reasons behind the rules.
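
To make the free-store point concrete, here is a rough, hypothetical sketch (an illustration only, not code from JSF++ or from the interview) of the style such rules push you toward: all storage is reserved up front, so "allocation" becomes a bounded, constant-time operation that reports failure with a null pointer rather than an exception.

#include <cstddef>

// Hypothetical fixed-capacity pool: all storage is reserved at link time,
// so acquiring a slot is a constant-time index check rather than a call
// into the general-purpose free store, and failure is an error value,
// not an exception.
template<class T, std::size_t N>
class StaticPool {
public:
    StaticPool() : used_(0) {}

    T* acquire()
    {
        if (used_ >= N) return 0;   // out of slots: no bad_alloc, no throw
        return &storage_[used_++];
    }
private:
    T           storage_[N];        // capacity fixed at design time
    std::size_t used_;
};

static StaticPool<int, 128> sensorReadings;   // illustrative name

int* record(int value)
{
    int* slot = sensorReadings.acquire();
    if (slot) *slot = value;        // the caller must handle the out-of-space case
    return slot;
}

Whether a scheme like this is acceptable depends entirely on an application's worst-case requirements; the point is only that the cost of every operation can be bounded at design time.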

In general, I think it essential that a coding standard is tuned to a specific application, organization, or application area. Trying to "legislate morality" for all users is counter-productive.

JB: What do you enjoy doing in your spare time? Have you read any books or watched any films lately that you liked, and if yes, why?

BS: Spare time? I like to run and to sightsee when I have a chance (usually when traveling for something work-related). Taking photographs is an excellent excuse for spending a bit of extra time in interesting places. Spending time with family is a high priority, and a pleasure, of course. I read non-technical books essentially every day. That's mostly light reading to relax. I recently re-read some of Raymond Chandler's novels; they age well. I also just finished Terry Jones and Alan Ereira's "Barbarians", about how the Romans really were the destructive villains of the ancient world; that's a refreshingly different perspective. I have always been fascinated by history. There are so many parts of history that nobody would have believed them if they weren't real; after all, fiction has to be probable. And then, of course, you can't read just one book about history; you need to read a lot to understand the context of events and to avoid being sold a biased fairy-tale version of something. Before that, I read Richard Dawkins' "Climbing Mount Improbable"; I basically have to re-learn biology because just about everything is new since I left school. Curiously, I'm often asked about my non-technical reading habits, so I posted a short list among my home pages.

Learn as Many Languages as You Can

Nick Plante recently reminded me in his blog of the advice in The Pragmatic Programmer to learn one programming language a year.

I say: why wait? If the Sapir-Whorf hypothesis has any bearing on computer programming (which I believe it does), then you can give your programming skills a big boost by cramming a bunch of languages into your head now.

However, you need to leave your safety zone and learn a properly disjoint set of languages so that you can effectively expand your thinking, and deepen your general understanding of programming.

Assuming that, like many of us, you come from a predominantly C++/Java background, I recommend learning the following languages in roughly the following order, and as quickly as possible.

Ruby - Ruby does a great job of showing how powerful a dynamic language can be, and leverages powerful ideas from Smalltalk, Perl, and Lisp.

Scheme - Scheme is a dialect of Lisp with some pretty hardcore implementations. Make sure you can wrap your head around call/cc and be sure to learn what "lambda" does and what the different "let bindings" are for.

PostScript - PostScript is a neat way to experience the power of stack-based programming. It looks like a toy, but it isn't; millions of printers around the world run it all the time.

Prolog - Prolog can make a large class of programming problems a snap to solve. It is also easy to implement in your language of choice.

ML - ML is one of the favourite languages of computer scientists. I suggest learning about algebraic data types (sum types and product types) and then moving on quickly to Haskell.

Haskell - I find Haskell makes the most sense only after knowing Scheme and ML. Go crazy with pattern matching, but avoid using monads unless absolutely necessary because they are cheating! You will be sorely tempted to resort to using them all over the place.

Erlang - See how easy distributed programming can be.

Getting some experience in this set of languages as soon as possible would really catapult you forward to a new level of programming. You will gain new insights into solving programming problems in whatever language you happen to be using. It will also make transitioning between languages a snap.

Of course, not everyone has the time or energy to learn a whole set of new languages. So if you have to choose only one new language for the time being, then my recommendation is Scala. Scala is very accessible to programmers from different backgrounds. Scala provides access to type inference and advanced techniques used in languages like Haskell, but still supports common Java idioms and dynamic programming.

The Scala by Example [pdf] online book, for example, was heavily influenced by the famous Scheme book Structure and Interpretation of Computer Programs. The fun thing about Scala is that you can introduce yourself to new concepts slowly and still be an effective programmer using a style you are more accustomed to. The more time you spend with Scala, the more you realize you can do with it; it can take you quite far.

For the highly motivated, I have compiled a list of roughly a hundred programming languages at http://www.plre.org/languages.html if you want to survey the landscape more thoroughly.

How Apple Got Everything Right By Doing Everything Wrong



One Infinite Loop, Apple's street address, is a programming in-joke — it refers to a routine that never ends. But it is also an apt description of the travails of parking at the Cupertino, California, campus. Like most things in Silicon Valley, Apple's lots are egalitarian; there are no reserved spots for managers or higher-ups. Even if you're a Porsche-driving senior executive, if you arrive after 10 am, you should be prepared to circle the lot endlessly, hunting for a space.

But there is one Mercedes that doesn't need to search for very long, and it belongs to Steve Jobs. If there's no easy-to-find spot and he's in a hurry, Jobs has been known to pull up to Apple's front entrance and park in a handicapped space. (Sometimes he takes up two spaces.) It's become a piece of Apple lore — and a running gag at the company. Employees have stuck notes under his windshield wiper: "Park Different." They have also converted the minimalist wheelchair symbol on the pavement into a Mercedes logo.

Jobs' fabled attitude toward parking reflects his approach to business: For him, the regular rules do not apply. Everybody is familiar with Google's famous catchphrase, "Don't be evil." It has become a shorthand mission statement for Silicon Valley, encompassing a variety of ideals that — proponents say — are good for business and good for the world: Embrace open platforms. Trust decisions to the wisdom of crowds. Treat your employees like gods.

It's ironic, then, that one of the Valley's most successful companies ignored all of these tenets. Google and Apple may have a friendly relationship — Google CEO Eric Schmidt sits on Apple's board, after all — but by Google's definition, Apple is irredeemably evil, behaving more like an old-fashioned industrial titan than a different-thinking business of the future. Apple operates with a level of secrecy that makes Thomas Pynchon look like Paris Hilton. It locks consumers into a proprietary ecosystem. And as for treating employees like gods? Yeah, Apple doesn't do that either.

But by deliberately flouting the Google mantra, Apple has thrived. When Jobs retook the helm in 1997, the company was struggling to survive. Today it has a market cap of $105 billion, placing it ahead of Dell and behind Intel. Its iPod commands 70 percent of the MP3 player market. Four billion songs have been purchased from iTunes. The iPhone is reshaping the entire wireless industry. Even the underdog Mac operating system has begun to nibble into Windows' once-unassailable dominance; last year, its share of the US market topped 6 percent, more than double its portion in 2003.

It's hard to see how any of this would have happened had Jobs hewed to the standard touchy-feely philosophies of Silicon Valley. Apple creates must-have products the old-fashioned way: by locking the doors and sweating and bleeding until something emerges perfectly formed. It's hard to see the Mac OS and the iPhone coming out of the same design-by-committee process that produced Microsoft Vista or Dell's Pocket DJ music player. Likewise, had Apple opened its iTunes-iPod juggernaut to outside developers, the company would have risked turning its uniquely integrated service into a hodgepodge of independent applications — kind of like the rest of the Internet, come to think of it.

And now observers, academics, and even some other companies are taking notes. Because while Apple's tactics may seem like Industrial Revolution relics, they've helped the company position itself ahead of its competitors and at the forefront of the tech industry. Sometimes, evil works.

Over the past 100 years, management theory has followed a smooth trajectory, from enslavement to empowerment. The 20th century began with Taylorism — engineer Frederick Winslow Taylor's notion that workers are interchangeable cogs — but with every decade came a new philosophy, each advocating that more power be passed down the chain of command to division managers, group leaders, and workers themselves. In 1977, Robert Greenleaf's Servant Leadership argued that CEOs should think of themselves as slaves to their workers and focus on keeping them happy.

Silicon Valley has always been at the forefront of this kind of egalitarianism. In the 1940s, Bill Hewlett and David Packard pioneered what business author Tom Peters dubbed "managing by walking around," an approach that encouraged executives to communicate informally with their employees. In the 1990s, Intel's executives expressed solidarity with the engineers by renouncing their swanky corner offices in favor of standard-issue cubicles. And today, if Google hasn't made itself a Greenleaf-esque slave to its employees, it's at least a cruise director: The Mountain View campus is famous for its perks, including in-house masseuses, roller-hockey games, and a cafeteria where employees gobble gourmet vittles for free. What's more, Google's engineers have unprecedented autonomy; they choose which projects they work on and whom they work with. And they are encouraged to allot 20 percent of their work week to pursuing their own software ideas. The result? Products like Gmail and Google News, which began as personal endeavors.

Jobs, by contrast, is a notorious micromanager. No product escapes Cupertino without meeting Jobs' exacting standards, which are said to cover such esoteric details as the number of screws on the bottom of a laptop and the curve of a monitor's corners. "He would scrutinize everything, down to the pixel level," says Cordell Ratzlaff, a former manager charged with creating the OS X interface.

At most companies, the red-faced, tyrannical boss is an outdated archetype, a caricature from the life of Dagwood. Not at Apple. Whereas the rest of the tech industry may motivate employees with carrots, Jobs is known as an inveterate stick man. Even the most favored employees could find themselves on the receiving end of a tirade. Insiders have a term for it: the "hero-shithead roller coaster." Says Edward Eigerman, a former Apple engineer, "More than anywhere else I've worked before or since, there's a lot of concern about being fired."

But Jobs' employees remain devoted. That's because his autocracy is balanced by his famous charisma — he can make the task of designing a power supply feel like a mission from God. Andy Hertzfeld, lead designer of the original Macintosh OS, says Jobs imbued him and his coworkers with "messianic zeal." And because Jobs' approval is so hard to win, Apple staffers labor tirelessly to please him. "He has the ability to pull the best out of people," says Ratzlaff, who worked closely with Jobs on OS X for 18 months. "I learned a tremendous amount from him."

Apple's successes in the years since Jobs' return — iMac, iPod, iPhone — suggest an alternate vision to the worker-is-always-right school of management. In Cupertino, innovation doesn't come from coddling employees and collecting whatever froth rises to the surface; it is the product of an intense, hard-fought process, where people's feelings are irrelevant. Some management theorists are coming around to Apple's way of thinking. "A certain type of forcefulness and perseverance is sometimes helpful when tackling large, intractable problems," says Roderick Kramer, a social psychologist at Stanford who wrote an appreciation of "great intimidators" — including Jobs — for the February 2006 Harvard Business Review.

Likewise, Robert Sutton's 2007 book, The No Asshole Rule, spoke out against workplace tyrants but made an exception for Jobs: "He inspires astounding effort and creativity from his people," Sutton wrote. A Silicon Valley insider once told Sutton that he had seen Jobs demean many people and make some of them cry. But, the insider added, "He was almost always right."

"Steve proves that it's OK to be an asshole," says Guy Kawasaki, Apple's former chief evangelist. "I can't relate to the way he does things, but it's not his problem. It's mine. He just has a different OS."

Nicholas Ciarelli created Think Secret — a Web site devoted to exposing Apple's covert product plans — when he was 13 years old, a seventh grader at Cazenovia Junior-Senior High School in central New York. He stuck with it for 10 years, publishing some legitimate scoops (he predicted the introduction of a new titanium PowerBook, the iPod shuffle, and the Mac mini) and some embarrassing misfires (he reported that the iPod mini would sell for $100; it actually went for $249) for a growing audience of Apple enthusiasts. When he left for Harvard, Ciarelli kept the site up and continued to pull in ad revenue. At heart, though, Think Secret wasn't a financial enterprise but a personal obsession. "I was a huge enthusiast," Ciarelli says. "One of my birthday cakes had an Apple logo on it."

Most companies would pay millions of dollars for that kind of attention — an army of fans so eager to buy your stuff that they can't wait for official announcements to learn about the newest products. But not Apple. Over the course of his run, Ciarelli received dozens of cease-and-desist letters from the object of his affection, charging him with everything from copyright infringement to disclosing trade secrets. In January 2005, Apple filed a lawsuit against Ciarelli, accusing him of illegally soliciting trade secrets from its employees. Two years later, in December 2007, Ciarelli settled with Apple, shutting down his site two months later. (He and Apple agreed to keep the settlement terms confidential.)

Apple's secrecy may not seem out of place in Silicon Valley, land of the nondisclosure agreement, where algorithms are protected with the same zeal as missile launch codes. But in recent years, the tech industry has come to embrace candor. Microsoft — once the epitome of the faceless megalith — has softened its public image by encouraging employees to create no-holds-barred blogs, which share details of upcoming projects and even criticize the company. Sun Microsystems CEO Jonathan Schwartz has used his widely read blog to announce layoffs, explain strategy, and defend acquisitions.

"Openness facilitates a genuine conversation, and often collaboration, toward a shared outcome," says Steve Rubel, a senior vice president at the PR firm Edeleman Digital. "When people feel like they're on your side, it increases their trust in you. And trust drives sales."

In an April 2007 cover story, we at Wired dubbed this tactic "radical transparency." But Apple takes a different approach to its public relations. Call it radical opacity. Apple's relationship with the press is dismissive at best, adversarial at worst; Jobs himself speaks only to a handpicked batch of reporters, and only when he deems it necessary. (He declined to talk to Wired for this article.) Forget corporate blogs — Apple doesn't seem to like anyone blogging about the company. And Apple appears to revel in obfuscation. For years, Jobs dismissed the idea of adding video capability to the iPod. "We want it to make toast," he quipped sarcastically at a 2004 press conference. "We're toying with refrigeration, too." A year later, he unveiled the fifth-generation iPod, complete with video. Jobs similarly disavowed the suggestion that he might move the Mac to Intel chips or release a software developers' kit for the iPhone — only months before announcing his intentions to do just that.

Even Apple employees often have no idea what their own company is up to. Workers' electronic security badges are programmed to restrict access to various areas of the campus. (Signs warning NO TAILGATING are posted on doors to discourage the curious from sneaking into off-limit areas.) Software and hardware designers are housed in separate buildings and kept from seeing each other's work, so neither gets a complete sense of the project. "We have cells, like a terrorist organization," Jon Rubinstein, former head of Apple's hardware and iPod divisions and now executive chair at Palm, told BusinessWeek in 2000.

At times, Apple's secrecy approaches paranoia. Talking to outsiders is forbidden; employees are warned against telling their families what they are working on. (Phil Schiller, Apple's marketing chief, once told Fortune magazine he couldn't share the release date of a new iPod with his own son.) Even Jobs is subject to his own strictures. He took home a prototype of Apple's boom box, the iPod Hi-Fi, but kept it concealed under a cloth.

But Apple's radical opacity hasn't hurt the company; rather, the approach has been critical to its success, allowing the company to attack new product categories and grab market share before competitors wake up. It took Apple nearly three years to develop the iPhone in secret; that was a three-year head start on rivals. Likewise, while there are dozens of iPod knockoffs, they have hit the market just as Apple has rendered them obsolete. For example, Microsoft introduced the Zune 2, with its iPod-like touch-sensitive scroll wheel, in October 2007, a month after Apple announced it was moving toward a new interface for the iPod touch. (Apple has been known to poke fun at its rivals' catch-up strategies. The company announced Tiger, the latest version of its operating system, with posters taunting, REDMOND, START YOUR PHOTOCOPIERS.)

Secrecy has also served Apple's marketing efforts well, building up feverish anticipation for every announcement. In the weeks before Macworld Expo, Apple's annual trade show, the tech media is filled with predictions about what product Jobs will unveil in his keynote address. Consumer-tech Web sites liveblog the speech as it happens, generating their biggest traffic of the year. And the next day, practically every media outlet covers the announcements. Harvard business professor David Yoffie has said that the introduction of the iPhone resulted in headlines worth $400 million in advertising.

But Jobs' tactics also carry risks — especially when his announcements don't live up to the lofty expectations that come with such secrecy. The MacBook Air received a mixed response after some fans — who were hoping for a touchscreen-enabled tablet PC — deemed the slim-but-pricey subnotebook insufficiently revolutionary. Fans have a nickname for the aftermath of a disappointing event: post-Macworld depression.

Still, Apple's radical opacity has, on the whole, been a rousing success — and it's a tactic that most competitors can't mimic. Intel and Microsoft, for instance, sell their chips and software through partnerships with PC companies; they publish product road maps months in advance so their partners can create the machines to use them. Console makers like Sony and Microsoft work hand in hand with developers so they can announce a full roster of games when their PlayStations and Xboxes launch. But because Apple creates all of the hardware and software in-house, it can keep those products under wraps. Fundamentally the company bears more resemblance to an old-school industrial manufacturer like General Motors than to the typical tech firm.

In fact, part of the joy of being an Apple customer is anticipating the surprises that Santa Steve brings at Macworld Expo every January. Ciarelli is still eager to find out what's coming next — even if he can't write about it. "I wish they hadn't sued me," he says, "but I'm still a fan of their products."

Back in the mid-1990s, as Apple struggled to increase its share of the PC market, every analyst with a Bloomberg terminal was quick to diagnose the cause of the computer maker's failure: Apple waited too long to license its operating system to outside hardware makers. In other words, it tried for too long to control the entire computing experience. Microsoft, Apple's rival to the north, dominated by encouraging computer manufacturers to build their offerings around its software. Sure, that strategy could result in an inferior user experience and lots of cut-rate Wintel machines, but it also gave Microsoft a stranglehold on the software market. Even Wired joined the fray; in June 1997, we told Apple, "You shoulda licensed your OS in 1987" and advised, "Admit it. You're out of the hardware game."

Oops.

When Jobs returned to Apple in 1997, he ignored everyone's advice and tied his company's proprietary software to its proprietary hardware. He has held to that strategy over the years, even as his Silicon Valley cohorts have embraced the values of openness and interoperability. Android, Google's operating system for mobile phones, is designed to work on any participating handset. Last year, Amazon.com began selling DRM-free songs that can be played on any MP3 player. Even Microsoft has begun to embrace the movement toward Web-based applications, software that runs on any platform.

Not Apple. Want to hear your iTunes songs on the go? You're locked into playing them on your iPod. Want to run OS X? Buy a Mac. Want to play movies from your iPod on your TV? You've got to buy a special Apple-branded connector ($49). Only one wireless carrier would give Jobs free rein to design software and features for his handset, which is why anyone who wants an iPhone must sign up for service with AT&T.

During the early days of the PC, the entire computer industry was like Apple — companies such as Osborne and Amiga built software that worked only on their own machines. Now Apple is the one vertically integrated company left, a fact that makes Jobs proud. "Apple is the last company in our industry that creates the whole widget," he once told a Macworld crowd.

But not everyone sees Apple's all-or-nothing approach in such benign terms. The music and film industries, in particular, worry that Jobs has become a gatekeeper for all digital content. Doug Morris, CEO of Universal Music, has accused iTunes of leaving labels powerless to negotiate with it. (Ironically, it was the labels themselves that insisted on the DRM that confines iTunes purchases to the iPod, and that they now protest.) "Apple has destroyed the music business," NBC Universal chief Jeff Zucker told an audience at Syracuse University. "If we don't take control on the video side, [they'll] do the same." At a media business conference held during the early days of the Hollywood writers' strike, Michael Eisner argued that Apple was the union's real enemy: "[The studios] make deals with Steve Jobs, who takes them to the cleaners. They make all these kinds of things, and who's making money? Apple!"

Meanwhile, Jobs' insistence on the sanctity of his machines has affronted some of his biggest fans. In September, Apple released its first upgrade to the iPhone operating system. But the new software had a pernicious side effect: It would brick, or disable, any phone containing unapproved applications. The blogosphere erupted in protest; gadget blog Gizmodo even wrote a new review of the iPhone, reranking it a "don't buy." Last year, Jobs announced he would open up the iPhone so that independent developers could create applications for it, but only through an official process that gives Apple final approval of every application.

For all the protests, consumers don't seem to mind Apple's walled garden. In fact, they're clamoring to get in. Yes, the iPod hardware and the iTunes software are inextricably linked — that's why they work so well together. And now, PC-based iPod users, impressed with the experience, have started converting to Macs, further investing themselves in the Apple ecosystem.

Some Apple competitors have tried to emulate its tactics. Microsoft's MP3 strategy used to be like its mobile strategy: license its software to (almost) all comers. Not anymore: The operating system for Microsoft's Zune player is designed uniquely for the device, mimicking the iPod's vertical integration. Amazon's Kindle e-reader provides seamless access to a proprietary selection of downloadable books, much as the iTunes Music Store provides direct access to an Apple-curated storefront. And the Nintendo Wii, the Sony PlayStation 3, and the Xbox 360 each offer users access to self-contained online marketplaces for downloading games and special features.

Tim O'Reilly, publisher of the O'Reilly Radar blog and an organizer of the Web 2.0 Summit, says that these "three-tiered systems," which blend hardware, installed software, and proprietary Web applications, represent the future of the Net. As consumers increasingly access the Web using scaled-down appliances like mobile phones and Kindle readers, they will demand applications that are tailored to work with those devices. True, such systems could theoretically be open, with any developer allowed to throw its own applications and services into the mix. But for now, the best three-tiered systems are closed. And Apple, O'Reilly says, is the only company that "really understands how to build apps for a three-tiered system."

If Apple represents the shiny, happy future of the tech industry, it also looks a lot like our cat-o'-nine-tails past. In part, that's because the tech business itself more and more resembles an old-line consumer industry. When hardware and software makers were focused on winning business clients, price and interoperability were more important than the user experience. But now that consumers make up the most profitable market segment, usability and design have become priorities. Customers expect a reliable and intuitive experience — just like they do with any other consumer product.

All this plays to Steve Jobs' strengths. No other company has proven as adept at giving customers what they want before they know they want it. Undoubtedly, this is due to Jobs' unique creative vision. But it's also a function of his management practices. By exerting unrelenting control over his employees, his image, and even his customers, Jobs exerts unrelenting control over his products and how they're used. And in a consumer-focused tech industry, the products are what matter. "Everything that's happening is playing to his values," says Geoffrey Moore, author of the marketing tome Crossing the Chasm. "He's at the absolute epicenter of the digitization of life. He's totally in the zone."

Leander Kahney (leander@wired.com), news editor of Wired.com, is the author of Inside Steve's Brain, to be published in April by Penguin Portfolio.

Encryption could make you more vulnerable, warn experts

The use of data encryption could make organizations vulnerable to new risks and threats, a panel of security experts warned Monday.

Many organizations are encrypting their stored data to relieve concerns over data theft or loss - for example, U.S. mandatory disclosure laws on data breaches do not apply to encrypted data.

However, experts from IBM Internet Security Systems, Juniper, nCipher and elsewhere said that data encryption also brings new risks, in particular via attacks - deliberate or accidental - on the key management infrastructure.

The change comes particularly with the shift from encrypting data in transit to encrypting stored data - often in response to regulatory demands - said Richard Moulds, nCipher's product strategy EVP.

"Lot of organizations are new to encryption," he added. "Their only exposure to it has been with SSL, but that's just a session. When you shift to data at rest and encrypt your laptop, if you lose the key you trash your data - it's a self-inflicted denial-of-service attack.

"Organizations experienced with encryption are standing back and saying this is potentially a nightmare. It is potentially bringing your business to a grinding halt."

Encryption is also as big an interest for the bad guys as the good guys, warned Anton Grashion, European security strategist for Juniper. "As soon as you let the cat out of the bag, they'll be using it too," he said. "For example, it looks like a great opportunity to start attacking key infrastructures."

"It's a new class of DoS attack," agreed Moulds. "If you can go in and revoke a key and then demand a ransom, it's a fantastic way of attacking a business."

Another risk is that over-zealous use of encryption will damage an organization's ability to legitimately share and use critical business data, noted Joshua Corman, principal security strategist for IBM ISS.

"One fear I have is that we're all going to hide all our information, but companies are information-driven, so we take tactical decision and stifle ability to collaborate," he said.

"Sometimes, the result of implementing security technology is actually a net increase in risk," added Richard Reiner, chief security and technology officer at Telus Security Solutions.

Boeing's New 787 May Be Vulnerable to Hacker Attack



Boeing's new 787 Dreamliner passenger jet may have a serious security vulnerability in its onboard computer networks that could allow passengers to access the plane's control systems, according to the U.S. Federal Aviation Administration.

The computer network in the Dreamliner's passenger compartment, designed to give passengers in-flight internet access, is connected to the plane's control, navigation and communication systems, an FAA report reveals.

The revelation is causing concern in security circles because the physical connection of the networks makes the plane's control systems vulnerable to hackers. A more secure design would physically separate the two computer networks. Boeing said it's aware of the issue and has designed a solution it will test shortly.

"This is serious," said Mark Loveless, a network security analyst with Autonomic Networks, a company in stealth mode, who presented a conference talk last year on Hacking the Friendly Skies (PowerPoint). "This isn’t a desktop computer. It's controlling the systems that are keeping people from plunging to their deaths. So I hope they are really thinking about how to get this right."

Currently in the final stages of production, the 787 Dreamliner is Boeing's new mid-sized jet, which will seat between 210 and 330 passengers, depending on configuration.

Boeing says it has taken more than 800 advance orders for the new plane, which is due to enter service in November 2008. But the FAA is requiring Boeing to demonstrate that it has addressed the computer-network issue before the planes begin service.

According to the FAA document published in the Federal Register (mirrored at Cryptome.org), the vulnerability exists because the plane's computer systems connect the passenger network with the flight-safety, control and navigation network. It also connects to the airline's business and administrative-support network, which communicates maintenance issues to ground crews.

The design "allows new kinds of passenger connectivity to previously isolated data networks connected to systems that perform functions required for the safe operation of the airplane," says the FAA document. "Because of this new passenger connectivity, the proposed data-network design and integration may result in security vulnerabilities from intentional or unintentional corruption of data and systems critical to the safety and maintenance of the airplane."

The information is published in a "special conditions" document that the FAA produces when it encounters new aircraft designs and technologies that aren't addressed by existing regulations and standards.

An FAA spokesman said he would not be able to comment on the issue until next week.

Boeing spokeswoman Lori Gunter said the wording of the FAA document is misleading, and that the plane's networks don't completely connect.

Gunter wouldn't go into detail about how Boeing is tackling the issue but says it is employing a combination of solutions that involves some physical separation of the networks, known as "air gaps," and software firewalls. Gunter also mentioned other technical solutions, which she said are proprietary and didn't want to discuss in public.

"There are places where the networks are not touching, and there are places where they are," she said.

Gunter added that although data can pass between the networks, "there are protections in place" to ensure that the passenger internet service doesn't access the maintenance data or the navigation system "under any circumstance."

She said the safeguards protect the critical networks from unauthorized access, but the company still needs to conduct lab and in-flight testing to ensure that they work. This will occur in March when the first Dreamliner is ready for a test flight.

Gunter said Boeing has been working on the issue with the FAA for a number of years already and was aware that the agency was planning to publish a "special conditions" document regarding the Dreamliner.

Gunter said the FAA and Boeing have already agreed on the tests that the plane manufacturer will have to do to demonstrate that it has addressed the FAA's security concerns.

"It will all be done before the first airplane is delivered," she said.

Loveless said he's glad the FAA and Boeing are addressing the issue, but without knowing specifically what Boeing is doing, it is impossible to say whether the proposed solution will work as intended. Loveless said software firewalls offer some protection, but are not bulletproof, and he noted that the FAA has previously overlooked serious onboard-security issues.

"The fact that they are not sharing information about it is a concern," he said. "I'd be happier if a credible auditing firm took a look at it."

Special conditions are not unusual. The FAA publishes them whenever it encounters unusual issues regarding a plane's design or performance in order to communicate on record that it expects the manufacturer to address the issue. It's then up to the manufacturer to demonstrate to the FAA that it has solved the problem. Gunter said the FAA has issued eight special conditions on the Boeing 787, but that not all of them pertain to the plane's computer systems.

History of Linux


a. In The Beginning

It was 1991, and the ruthless agonies of the cold war were gradually coming to an end. An air of peace and tranquility prevailed on the horizon. In the field of computing, a great future seemed to be in the offing, as powerful hardware pushed the limits of computers beyond what anyone had expected.

But still, something was missing.

And it was none other than the operating system, where a great void seemed to have appeared.

For one thing, DOS was still reigning supreme in its vast empire of personal computers. Bought by Bill Gates from a Seattle hacker for $50,000, the bare-bones operating system had sneaked into every corner of the world by virtue of a clever marketing strategy. PC users had no other choice. Apple Macs were better, but with astronomical prices that nobody could afford, they remained a horizon away from the eager millions.

The other dedicated camp of computing was the Unix world. But Unix itself was far more expensive. In their quest for big money, the Unix vendors priced it high enough to ensure that small PC users stayed away from it. The source code of Unix, once taught in universities courtesy of Bell Labs, was now carefully guarded and no longer published openly. To add to the frustration of PC users worldwide, the big players in the software market failed to provide an efficient solution to the problem.

A solution seemed to appear in the form of MINIX. It was written from scratch by Andrew S. Tanenbaum, a US-born Dutch professor who wanted to teach his students the inner workings of a real operating system. It was designed to run on the Intel 8086 microprocessors that had flooded the world market.

As an operating system, MINIX was not a superb one. But it had the advantage that its source code was available. Anyone who happened to get the book 'Operating Systems: Design and Implementation' by Tanenbaum could get hold of its 12,000 lines of code, written in C and assembly language. For the first time, an aspiring programmer or hacker could read the source code of an operating system, something the software vendors had until then guarded vigorously. A superb author, Tanenbaum captivated the brightest minds of computer science with his elaborate and immaculately lively discussion of the art of creating a working operating system. Students of computer science all over the world pored over the book, reading through the code to understand the very system that ran their computers.

And one of them was Linus Torvalds.