Tech watch

The OLPC project is finally taking its low-cost laptop into production later this year. The laptop, aimed at children in developing countries, is called the XO and is expected to cost around $175 at launch, though volume production should push the price down. Intel, on the other hand, has already started shipping small volumes of its own $285 low-cost laptop, the Classmate, to India, Vietnam, Pakistan, Thailand, and the Philippines, and plans to extend this to Malaysia, Indonesia, and Sri Lanka by the end of the year under its World Ahead program.
Recently, in an interview with the Washington Post, Intel VP Sean Maloney discussed the company's plans for the Classmate. He reiterated that the Classmate is not aimed at bridging the digital divide between the West and the third world. To quote him:

“It is amazing how many mainstream, made in Taiwan or made in China, notebooks are sold in emerging markets. People have the same aspirations and brand aspirations. You can’t patronize people and say we get the big one with the 14-inch color screen and you get the little one, that’s not going to work. My view is that these things, like Classmate PC, are much better targeted at kids, with a much smaller screen, smaller keyboard.”

Maloney also states that the aim of the low-cost Classmate PC is, more than anything else, to let children access the World Wide Web. (Read the interview here.)

Low-cost alternatives like the XO and the Classmate face a lot of problems in realizing their goals. Take India, for example, where $285 translates to Rs 11,400, way beyond what a poor family can afford. Even if a state government decided to bear the cost, equipping just 100,000 kids would cost Rs 1.14 billion, or about $28.5 million; that is quite a big number. Subsidy is an option but not the definitive answer.
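The arithmetic is worth sanity-checking. A quick back-of-the-envelope sketch, assuming the roughly Rs 40 to the dollar exchange rate implied by the figures above:

```python
# Back-of-the-envelope cost of subsidizing Classmate laptops for 100,000 kids.
unit_cost_usd = 285          # Classmate price at launch
usd_to_inr = 40              # approximate 2007 exchange rate (assumption)
students = 100_000

unit_cost_inr = unit_cost_usd * usd_to_inr      # Rs 11,400 per laptop
total_inr = unit_cost_inr * students            # Rs 1.14 billion
total_usd = total_inr / usd_to_inr              # back to dollars

print(f"Rs {total_inr:,} (~${total_usd:,.0f})")  # Rs 1,140,000,000 (~$28,500,000)
```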
Content delivery is another hurdle: the software and training material have to be customized to the particular needs of each country, and that requires support staff. Fixing a conked-out laptop requires trained manpower, which is yet another overhead, especially on such shoestring budgets.

Accomplishing the goals set by the OLPC project or Intel is not going to be easy, especially when over two billion children have no access to computers. It's a long and winding road ahead.

Find out more about OLPC here.

Read about the Intel Classmate.

Download the Classmate brochure here.


From the recent announcements of the chip majors, it seems the catch-up games they have been playing with each other are over.
What am I talking about? First it was all about pushing clock speeds up; more recently it has been about putting more cores on a chip more efficiently than the competition. But as multicore architectures mature, chip majors like Intel and AMD have started to diverge on their multicore strategies: Intel is moving towards homogeneous cores, while AMD and IBM look to be adopting a more heterogeneous approach.

Homogeneous, as the word implies, means all the processing units (cores) on a processor are of the same type and divide the workload between them for maximum efficiency. The heterogeneous approach is more complicated, as it can be implemented in several ways: different-sized cores for different dedicated functions is one; another splits a problem's workload between a general-purpose processor and one or more specialized, problem-specific processors. Heterogeneous computing is a broader research area and the concept has been around for a while; it also encompasses efforts like GPGPU computing, the ClearSpeed accelerators, and more recent products like the AGEIA PhysX co-processor. The best recent example of a heterogeneous processor is the STI Cell B.E., with its PowerPC core and eight synergistic processor units. AMD has plans for a similar architecture, and its first heterogeneous multicore offering may be a combined CPU and GPU in the client space.
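To make the distinction concrete, here is a toy Python sketch, with entirely invented core and task names: a homogeneous scheduler hands every task to identical cores, while a heterogeneous one routes each task to the unit specialized for it.

```python
# Toy illustration of homogeneous vs. heterogeneous scheduling.
# The "cores" here are just Python functions; real schedulers work at
# the hardware/OS level, but the routing idea is the same.

def general_core(task):
    # A general-purpose core handles any task, just not optimally.
    return f"general handled {task['kind']}"

def simd_core(task):
    # A specialized unit (think Cell SPE or GPU shader) for vector work.
    return f"simd handled {task['kind']}"

def homogeneous_run(tasks):
    # Identical cores: divide the workload evenly, no routing decision.
    return [general_core(t) for t in tasks]

def heterogeneous_run(tasks):
    # Route each task to the core type best suited for it.
    routes = {"vector": simd_core, "scalar": general_core}
    return [routes[t["kind"]](t) for t in tasks]

tasks = [{"kind": "vector"}, {"kind": "scalar"}, {"kind": "vector"}]
print(heterogeneous_run(tasks))
```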

On the software side, task-level parallelism and workload partitioning remain the dominant issues for multicore platforms, whether heterogeneous or homogeneous. These issues will be more acute on heterogeneous systems, since the specialized processors throw up a whole new set of problems. I believe heterogeneous computing is geared towards extreme-performance computing; the GPGPU movement and the Cell's performance with Folding@home prove this point. General-purpose computing might go the homogeneous way, as the challenges there are far fewer and are fast being resolved.

I happened to stumble upon this at TG Daily. It's called the Optimus Maximus keyboard, developed by a Russian company called Art Lebedev Studio. Now for the cool part of the story: every key of the Optimus Maximus is a stand-alone display showing the function it is currently associated with.

The other features of this obscenely expensive piece of hardware are as follows: a stand-alone display in each of its 113 keys, each measuring 10.1 x 10.1 mm with a resolution of 48 x 48 pixels. Apparently the keys can display not just images but video as well, at frame rates of up to 10 fps. Up to 65,536 colors are supported, viewable at angles of up to 160 degrees. Image and video layouts are stored on SD cards, which can be inserted in the back of the keyboard.

Hmmm... really impressive. If you think about it, it's quite an achievement. But then again, how many computer users ever look down at their keyboards while typing? I don't know many, and for $1,500 I'll build a kick-ass rig complete with disco lights. Why not spend all that effort making keyboards more ergonomic or durable instead, or adding features that make life easier for people with disabilities? This goes right up there with the Finger Nose Hair Trimmer on my list of utterly useless products. Anyone who can give me one good reason to buy the Optimus Maximus gets a candy. Any takers?

May 21 saw the annual PCI Special Interest Group developers' conference in San Jose, California. It seems the move to PCI-E 2.0 is going to happen very soon, with many major players showing off PCI-E 2.0 technology at the conference. For the uninitiated, PCI-E 2.0 has been in development for some time and aims to double the interconnect bit rate from 2.5 GT/s to 5 GT/s, effectively increasing the aggregate bandwidth of a 16-lane link to approximately 16 GB/s.
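That 16 GB/s figure follows directly from the signalling rate. A quick sanity check, assuming the 8b/10b line coding PCIe 2.0 inherits from 1.x (8 payload bits for every 10 bits on the wire):

```python
# PCIe 2.0 aggregate bandwidth for a x16 link, from first principles.
transfer_rate = 5e9      # 5 GT/s per lane (PCIe 2.0; 1.x was 2.5 GT/s)
encoding = 8 / 10        # 8b/10b line coding overhead
lanes = 16
directions = 2           # each lane is a full-duplex differential pair

bytes_per_lane = transfer_rate * encoding / 8         # 500 MB/s per lane
per_direction = bytes_per_lane * lanes                # 8 GB/s each way
aggregate = per_direction * directions                # 16 GB/s total

print(f"{aggregate / 1e9:.0f} GB/s aggregate")  # 16 GB/s aggregate
```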
Intel, which promised to launch a PCI-E 2.0 motherboard before 2008 rolls in, demonstrated unreleased AMD and NVIDIA graphics chips on its Stoakley chipset for workstations, which offers two PCI-E 2.0 ports supporting 16 parallel lanes each. Majors like ARM, LSI, NEC, and Synopsys also showed off their PCI-E 2.0 technology at the conference.
Intel is expected to release its first chipsets supporting PCIe 2.0 in the second quarter of 2007 with its "Bearlake" family. AMD will start supporting PCIe 2.0 with its RD700 chipset series, and NVIDIA with its MCP72 chipset. The PCI SIG is already working to define version 3.0 of Express, which could appear in products in late 2009 and will probably target 8 or 10 gigatransfers per second.

So come 2008, get ready to embrace Express 2.0 as the new standard, and gear up for faster, high-performance graphics cards that will eat up to 300 W of power under the Express 2.0 specification. How fast will the transition happen? Looking at the merciless moves from PCI to AGP and then to PCI Express, I would say soon... very soon 🙂

Recently the One Laptop Per Child (OLPC) group announced that the price of its low-cost laptop would rise from $100 to $175. Closer to home, a Chennai-based company has introduced a PC for INR 4,500 (around $100) and is all set to market it to a potential 10 million customers across the globe. The Net PC, as they call it, is a network computer designed on a completely new hardware platform without using any of the typical PC or thin-client components; the design instead uses components developed for advanced electronic and digital devices. The devices come as single- or dual-processor solutions and can be connected to a basic home TV, which serves as the display. For as little as $10 a month the company provides access to all the basic computing functions along with a broadband connection. There is no local storage, though; all storage is on a remote server maintained by the company. I won't get into the technical details of the product; visit the Novatium site for more information, where you can download the product specifications. I believe this is a great initiative, especially for developing countries where a PC is still a luxury and not a necessity. Maybe the OLPC guys can take a few hints from this 🙂

Computing has come a long way. Remember the Cray-1? It topped out at around 160 megaflops. Recently AMD announced a "Teraflop in a box": one Opteron processor and two R600 GPUs combining to dish out more than 1 trillion floating-point calculations per second using a general multiply-add calculation. Intel, at its Beijing IDF, showed off its concept 80-core teraflop processor. The Cell B.E. is a multicore processor of sorts, with one PPE and eight SPEs, capable of crunching out around 256 gigaflops. Teraflop-scale processing is already happening on the Cell with Folding@home and the PS3, and according to reports it is showing some great results.

With the industry moving towards this parallel-processing revolution, development on the software side seems remarkably slow. What I mean is that software running on these multicore processors should be optimized for them, yet most ISVs don't have the in-house expertise to build multithreaded applications. Switching from one core to many cores presents its own set of development and debugging challenges, and a large percentage of mission-critical enterprise applications are not "multicore optimized". The result is applications that show no performance boost when moved to multicore processors, and in some cases perform even worse: a single-threaded application can't utilize the additional cores efficiently without sacrificing ordered processing, so the cores sit under-utilized. There are currently no automated parallelizing compilers for multicore processors, and then there is the problem of application priority; many such issues need to be sorted out before the transition to multicore can be complete.
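A minimal sketch of what "multicore optimized" means in practice, using Python's standard library: the same CPU-bound workload run serially and then explicitly partitioned across worker processes. The names and chunk sizes are invented for illustration; the point is that the speedup only exists because the work was split up, since a plain single-threaded loop leaves the extra cores idle.

```python
# Splitting a CPU-bound workload across cores with a process pool.
# A single-threaded loop over the same chunks would leave the extra
# cores idle, which is exactly the "not multicore optimized" problem.
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # Stand-in for a compute-heavy, independent unit of work.
    return sum(i * i for i in range(n))

def serial(chunks):
    # One core does everything, in order.
    return [crunch(n) for n in chunks]

def parallel(chunks, workers=4):
    # Explicit partitioning: each chunk can land on a different core.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(crunch, chunks))

if __name__ == "__main__":
    chunks = [100_000] * 8
    assert serial(chunks) == parallel(chunks)
    print("results match; the parallel run can use all cores")
```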

Moving further, the Cell B.E. is slowly going mainstream, with IBM announcing a new line of servers powered by it. Researchers have also released initial details of the EDGE (Explicit Data Graph Execution) processor architecture. Instead of one instruction at a time, EDGE handles large blocks of instructions all at once. Using many copies of a small number of replicated tiles, the target for TRIPS (the first prototype chip on the EDGE architecture) is to hit 5 TFLOPS on 32 nm manufacturing by 2009. Intel is also looking at Larrabee to give it similar numbers.
In the future there is a good chance we will move to some non-x86 ISA processor that does the job in a better, more efficient way.

More on EDGE and TRIPS
A whitepaper on EDGE

NVIDIA has strapped two Quadro FX 5600 GPUs (the OpenGL workstation version of the mighty 8800 GTX) onto its already impressive line of Quadro Plex VCS (visual computing system) units. It looks like they are getting ready to take on the emerging GPGPU market, as the new workstations support GPU computing through NVIDIA's CUDA programming model.
GPGPU has become a lucrative market for the future, and a lot of big names are pouring money into it; recently RapidMind, a startup, grabbed $10 million of VC money for its GPGPU platform. Keeping pace, NVIDIA is pushing its graphics cards hard as a platform for massively multithreaded processing applications.
With the new GPUs, the total frame buffer goes up to 3 GB (1.5 GB per GPU), FSAA (full-screen anti-aliasing) goes up to 64x, and as with the 8800 GTX the GPUs have a unified shader architecture and fully support Shader Model 4.0. Now comes my favorite part, the performance stats: 64x SLI FSAA, 16 synchronized output channels, 8 HD SDI channels, a 60 billion pixels/sec fill rate, 1 billion triangles/sec geometry performance, and the ability to drive display walls of up to 148 megapixels. And you can have many of these Quadro Plex boxes in your visualization cluster for scalability. Oh, I forgot to mention: they cost $18,000 apiece.
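The CUDA model those cards expose is, at heart, a data-parallel map: one small kernel run once per data element, with the hardware supplying the massive thread count. A rough serial Python analogue (the kernel and launch helper below are hypothetical stand-ins for illustration, not the real CUDA API):

```python
# CUDA-style kernel launch, mimicked in plain Python.
# On a GPU, each thread runs the kernel concurrently on its own index;
# here we just loop, which is exactly why GPUs win on this workload.

def saxpy_kernel(tid, a, x, y, out):
    # One "thread" computes one element: out[tid] = a*x[tid] + y[tid].
    out[tid] = a * x[tid] + y[tid]

def launch(kernel, n_threads, *args):
    # Serial stand-in for the GPU's parallel grid launch.
    for tid in range(n_threads):
        kernel(tid, *args)

n = 8
x = list(range(n))
y = [1.0] * n
out = [0.0] * n
launch(saxpy_kernel, n, 2.0, x, y, out)
print(out)
```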
