I happened to stumble upon this at TGDaily. It's called the Optimus Maximus keyboard, and it is developed by a Russian company called Art Lebedev Studio. Now for the cool part of the story: every key of the Optimus Maximus keyboard is a stand-alone display showing the function it is currently associated with.

Other disturbing features of this obscenely expensive piece of hardware are as follows: a standalone display in each of its 113 keys, each 10.1 x 10.1 mm in size with a resolution of 48 x 48 pixels. Apparently the keys can display not only images but also video at frame rates of up to 10 fps. Up to 65,536 colors are supported, viewable at angles of up to 160 degrees. Image and video layouts are stored on SD cards, which can be inserted into a slot on the back of the keyboard.
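Out of curiosity, here is my back-of-envelope math on what those specs add up to per key. This is just arithmetic from the numbers above (65,536 colors implies 16 bits per pixel); the actual keyboard's internal format is anyone's guess.

```python
# Back-of-envelope figures for one Optimus Maximus key, based on the
# quoted specs: 48x48 pixels, 65,536 colors (= 16 bits/pixel), 10 fps.
PIXELS = 48 * 48
BITS_PER_PIXEL = 16   # 65,536 colors = 2^16
FPS = 10
KEYS = 113

bits_per_frame = PIXELS * BITS_PER_PIXEL
bytes_per_sec_per_key = bits_per_frame * FPS // 8
total_kb_per_sec = KEYS * bytes_per_sec_per_key / 1024

print(f"{bytes_per_sec_per_key} bytes/sec per key")      # 46080
print(f"~{total_kb_per_sec:.0f} KB/s for all 113 keys")  # ~5085
```

So uncompressed, the whole keyboard pushes roughly 5 MB of pixel data a second. Not exactly demanding by 2007 standards, which makes the price tag even funnier.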

Hmmm… really impressive. If you think about it, it's quite an achievement, but then again, how many computer users ever look down at their damn keyboards while typing? I don't know many, and for $1,500 I'll build a kick-ass rig complete with disco lights. Instead of this, why not spend all that effort making keyboards more ergonomic or durable, or putting in features which make them easier to use for people with disabilities? This goes right up there with the Finger Nose Hair Trimmer on my list of utterly useless products. Anyone who can give me one good reason to buy the Optimus Maximus gets a candy. Any takers??

May 21 saw the annual PCI Special Interest Group developers conference in San Jose, California. It seems the move to PCI-E 2.0 is going to happen very soon, with a lot of major players showing off PCI-E 2.0 technology at the conference. For the uninitiated, PCI-E 2.0 has been in development for some time now and aims to double the interconnect bit rate from 2.5 GT/s to 5 GT/s, effectively increasing the aggregate bandwidth of a 16-lane link to approximately 16 GB/s.
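In case you're wondering where that 16 GB/s figure comes from, here's the quick math. PCI-E 1.x and 2.0 both use 8b/10b line encoding (10 bits on the wire for every 8 data bits), and the "aggregate" number counts both directions of the link:

```python
# PCI-E 2.0 bandwidth, back-of-envelope.
GT_PER_S = 5.0       # raw transfer rate per lane (PCI-E 2.0)
ENCODING = 8 / 10    # 8b/10b: 8 data bits per 10 line bits
LANES = 16

mb_per_lane_dir = GT_PER_S * ENCODING / 8 * 1000   # MB/s, one direction
gb_aggregate = mb_per_lane_dir * LANES * 2 / 1000  # x16 link, both directions

print(mb_per_lane_dir)  # 500.0 MB/s per lane per direction
print(gb_aggregate)     # 16.0 GB/s aggregate for a x16 link
```

The same arithmetic on PCI-E 1.x's 2.5 GT/s gives the familiar 250 MB/s per lane and 8 GB/s aggregate, so 2.0 really is a clean doubling.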
Intel, which promised to launch a PCI-E 2.0 motherboard before 2008 rolls in, demonstrated unreleased AMD and NVIDIA graphics chips on its Stoakley chipset for workstations, which offers two PCI-E 2.0 ports supporting 16 parallel lanes each. Majors like ARM, LSI, NEC and Synopsys also showed off their PCI-E 2.0 technology at the conference.
Intel is expected to release its first chipsets supporting PCIe 2.0 in the second quarter of 2007 with its 'Bearlake' family. AMD will start supporting PCIe 2.0 with its RD700 chipset series, and NVIDIA with its MCP72 chipset. The PCI SIG is already working to define a version 3.0 of Express that could appear in products in late 2009; it will probably target 8 or 10 gigatransfers/second.

So come 2008, get ready to embrace Express 2.0 as the new standard, and gear up for faster, higher-performance graphics cards which will eat up to 300W of power under the Express 2.0 specification. How fast will the transition happen? Looking at the merciless move from PCI to AGP and then to PCI Express, I would say soon… very soon 🙂

I am back, after an amazing trip to Goa and a few trips to the hospital owing to a horrendous bout of viral fever. I really needed this break to get away from my boring daily schedule and get some time for myself. I guess it worked, because my energy levels are at an all-time high. So it's back to the usual now, at least for the next two months, because then I move on to do my masters.

Recently the One Laptop Per Child (OLPC) group announced that the price of its low-cost laptop would be raised from $100 to $175. Closer to home, a Chennai-based company has introduced a PC for INR 4,500 (around $100) and is all set to market it to a potential 10 million customers across the globe. The Net PC, as they call it, is a network computer designed on a completely new hardware platform, without using any of the typical PC or thin-client components. The hardware design instead uses components designed and developed for advanced electronic and digital devices. The devices come as single- or dual-processor solutions and can be connected to a basic home TV, which serves as the display. For as little as $10 a month the company provides access to all the basic computing functions and also provides a broadband connection. There is no local storage, though; all storage is done on a remote server maintained by the company. I won't get into the technical details of the product; visit the Novatium site for more information, where you can download the product specifications. I believe this is a great initiative, especially for developing countries where a PC is still a luxury and not a necessity. Maybe the OLPC guys can take a few hints from this 🙂


Computing has come a long way. Remember the Cray-1? It topped out at around 160 megaflops. Recently AMD announced a "Teraflop in a box": one Opteron processor and two R600 GPUs combining to dish out more than 1 trillion floating-point calculations per second using a general multiply-add calculation. Intel, at its Beijing IDF, showed off its concept 80-core teraflop processor. The Cell B.E. is a multicore processor of sorts, with one PPE and eight SPEs, capable of crunching out 256 GFLOPS. Teraflop-scale processing is already happening on the Cell with Folding@home and the PS3, and according to reports it's showing some great results.
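All of these headline numbers fall out of the same peak-FLOPS formula: execution units x SIMD width x flops per cycle x clock. As a sketch, here is how the Cell's oft-quoted 256 GFLOPS figure works out, assuming the original 4 GHz design target and counting a fused multiply-add as two flops (at the 3.2 GHz the PS3 actually ships with, the same math gives about 205 GFLOPS for the eight SPEs):

```python
# Peak single-precision FLOPS = units x SIMD width x flops/cycle x clock.
def peak_gflops(units, simd_width, flops_per_cycle, ghz):
    return units * simd_width * flops_per_cycle * ghz

# 8 SPEs, 4-wide single-precision SIMD, fused multiply-add = 2 flops/cycle.
cell_at_4ghz = peak_gflops(8, 4, 2, 4.0)
cell_at_3_2ghz = peak_gflops(8, 4, 2, 3.2)

print(cell_at_4ghz)    # 256.0 GFLOPS
print(cell_at_3_2ghz)  # ~204.8 GFLOPS
```

Peak numbers, of course, assume every unit issues an FMA every cycle; real workloads rarely get anywhere close.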

With the industry moving towards this parallel-processing revolution, development on the software side of things seems remarkably slow. What I mean here is that software running on these multicore processors should be optimized for them, yet most ISVs don't have the in-house programming expertise to build multithreaded applications. Switching from one core to many cores presents its own set of development and debugging challenges, and a large percentage of mission-critical enterprise applications are not "multicore optimized". This leads to applications showing no performance boost when moved to multicore processors, and in some cases even poorer performance: a single-threaded application can't utilize the additional cores efficiently without sacrificing ordered processing, so the cores go under-utilized and performance suffers. Currently there are no automated parallelizing compilers for multicore processors, and then there is the problem of application priority; there are many such issues which need to be sorted out before the transition to multicore can be complete.
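The textbook way to quantify this is Amdahl's law: if only a fraction p of an application's work can run in parallel, the speedup on n cores is capped at 1 / ((1 - p) + p / n). A quick sketch (generic formula, not tied to any particular product above):

```python
# Amdahl's law: upper bound on speedup when a fraction p of the work
# is parallelizable and the remaining (1 - p) stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A mostly serial app barely benefits from 8 cores...
print(f"{amdahl_speedup(0.10, 8):.2f}x")  # 1.10x
# ...while a well-threaded one gets much closer to linear scaling.
print(f"{amdahl_speedup(0.95, 8):.2f}x")  # 5.93x
```

Which is exactly why a legacy single-threaded enterprise app sees no gain on a multicore chip: its p is close to zero, so extra cores buy it nothing.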

Moving further, the Cell B.E. is slowly going mainstream, with IBM announcing a new line of servers powered by it. Researchers have already released initial details about the EDGE processor architecture, which stands for Explicit Data Graph Execution. Instead of one instruction at a time, EDGE handles large blocks of instructions all at once. Using many copies of a small number of replicated tiles, the target for TRIPS (the first prototype chip on the EDGE architecture) is to hit 5 TFLOPS on 32nm manufacturing by 2009. Intel is also looking at Larrabee to give it similar numbers.
In the future, there is a good chance that we will move to some non-x86 ISA processor which does the job in a better, more efficient way.

More on EDGE and TRIPS
A whitepaper on EDGE


I am a staunch supporter of Sony, and I appreciate the fact that they have put in a lot of money and time to deliver a machine which is truly outstanding. Every day I read about the Cell B.E. breaking new ground, proving again that the PS3 has a lot of horsepower which is currently underutilized. This is an argument I hear from the fanboys a lot: they say that the PS3 packs more power, has the next-generation Blu-ray drive and the mother of all CPUs in the Cell, and that in the next two years it will kick everybody's butt, from the Xbox 360 to the Nintendo Wii and even HD-DVD. I seriously hope so, because I wish the PS3 well. But consider this…
The PS2 currently has an install base of around 110 million, and that is a monstrous number. If the PS3 had been launched alongside the Xbox 360 as a competing console in the same price range, rather than a year later, the number of Xbox 360s sold to date would have been less than a million. I am sure 90% of the people who own an Xbox 360 now have owned PS2s before, and they bought their 360s because there was nothing available in the market which could compete with it; if they had a better option from Sony, they would have bought that. So was it a good decision to include the Blu-ray player? It would have been, if Sony could have delivered it on time and in the same price range as a 360. But once they realized that it would not match the competition on cost and would also be late to market, they should have gone with the standard dual-layer DVD format; once they had the blue-laser diode mess sorted out, they could have launched another version with Blu-ray at a higher price point. All this time without a competitor has given MS a good install base which third-party developers just can't neglect. If going multi-platform with DMC4 can move another 1 million units for Capcom, then it makes sense to them financially, because in the end it's all about the money.
Another factor is the cost. Here in India a PS3 costs Rs 40,000 ($1,000) and an Xbox 360 basic costs Rs 20,000 ($500); the difference is humongous. The consoles don't sell in the millions here, but if someone wants to buy one, which one would it be? Any guesses!! But I still have hope for the PS3, because all rumors point to a price drop of at least $100: starting in June, the blue-violet laser diodes will only cost 900 yen to produce (about $8). Last year the Blu-ray player itself was estimated to cost $125, and a good chunk of that was probably the diodes.
And that brings me to the last and most important factor: content. The DS sells far more than the superior PSP because it has excellent content available for it; the iPod sold 100 million because Apple developed an entire ecosystem around the product; and likewise the PS2 sold millions because of the content which came out for it over the years. When the demand for PS3 content increases, the PS3 will start to sell more.

I wouldn't count the PS3 out, because it's an amazing piece of hardware and has had a good reception in most places it has launched. I won't be surprised if it indeed emerges as the next-gen leader.

NVIDIA has strapped two Quadro FX 5600 GPUs (the OpenGL version of the mighty 8800 GTX) onto its already impressive line of Quadro Plex VCS (visual computing systems). Looks like they are getting ready to take on the emerging GPGPU market, as the new workstations support GPU computing with NVIDIA's CUDA programming model.
GPGPU has become a lucrative market for the future, and a lot of big names are pouring money into it; recently RapidMind, a startup, grabbed $10 million of VC money for its GPGPU platform. Keeping pace, NVIDIA is pushing its graphics cards hard as a platform for massively multi-threaded processing applications.
With the new GPUs, the total frame buffer goes up to 3 GB (1.5 GB per GPU), FSAA (full-screen anti-aliasing) goes up to 64x, and, as with the 8800 GTX, the GPUs have a unified shader architecture and fully support Shader Model 4.0. Now comes my favorite part, the performance stats: 64x SLI FSAA, 16 synchronized output channels, 8 HD SDI channels, a 60 billion pixels/sec fill rate, 1 billion triangles/sec of geometry performance, and the ability to drive display walls of up to 148 megapixels. And you can have many of these Quadro Plex boxes in your visualization cluster for scalability. Oh! I forgot to mention they cost $18,000 apiece.
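Just for fun, a sanity check on those two headline numbers together. This is pure peak-spec arithmetic; a real scene with 64x FSAA and actual shading would repaint far slower:

```python
# How often could a 60 billion pixels/sec fill rate repaint a
# 148-megapixel display wall? (Theoretical peak only.)
FILL_RATE = 60e9       # pixels/sec, quoted fill rate
WALL_PIXELS = 148e6    # 148-megapixel display wall

repaints_per_sec = FILL_RATE / WALL_PIXELS
print(round(repaints_per_sec))  # ~405 theoretical repaints/sec
```

So even the largest wall it claims to support leaves hundreds of theoretical repaints a second of headroom, which tells you the 148-megapixel ceiling comes from output channels and memory, not raw fill rate.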