May 2007


The guys at The Inquirer seem to have the scoop on the next generation GPU from nVidia. In an analyst webcast, Nvidia Investor Relations and Communications VP Michael Hara stated that the top end card based on the G92 graphics processor will be ready for Christmas and that it will have computing power close to 1 TeraFLOP.
Time for some cool stats. Intel’s latest quad core Core 2 Extreme QX6700 runs at 2.66 GHz and has a peak floating point performance of 50 GFLOPS, while nVidia’s G80 (which powers the 8800 GTX) has a peak floating point performance of 330 GFLOPS, AMD’s R600 can do a maximum of 450 GFLOPS, and the STI Cell B.E. does close to 250 GFLOPS at maximum throughput. So if the G92 rumor holds, it would mean the G92 will outperform the Cell by around 4 times and its predecessor, the G80, by about 3 times in terms of raw computing power.
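Just to sanity check those ratios, here is the back-of-the-envelope arithmetic (nothing more than the quoted peak figures divided out; the G92 number is still a rumor):

```python
# Quick ratio check using the peak GFLOPS figures quoted above.
g92_rumored = 1000   # ~1 TeraFLOP, if the rumor holds
g80 = 330
cell_be = 250

print(f"G92 vs Cell B.E.: {g92_rumored / cell_be:.1f}x")  # ~4.0x
print(f"G92 vs G80:       {g92_rumored / g80:.1f}x")      # ~3.0x
```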
One problem with GPUs is their inability to support 64-bit floating point operations, which are necessary for almost all supercomputing applications. On the other hand, GPUs are far cheaper than supercomputer vector processors, which makes their use in HPC (high performance computing) attractive. nVidia had promised FP64 support on GPUs by late 2007, so is the G92 the promised chip? If so, it is going to have a big impact on the GPGPU movement and on HPC in general.
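A tiny illustration of why the 64-bit point matters (my own example, nothing to do with the G92 itself): in single precision, integers above 2^24 can no longer be represented exactly, which is exactly the kind of silent error long scientific computations cannot tolerate.

```python
# Single vs double precision: 2**24 + 1 cannot be represented in float32.
import numpy as np

big = np.float32(16_777_216.0)            # 2**24
print(big + np.float32(1.0) == big)       # True: the added 1.0 is silently lost
print(np.float64(16_777_216.0) + 1.0)     # 16777217.0: double precision keeps it
```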
One little concern for me is the lack of PC titles that can take advantage of such horsepower; Crysis is a contender and so is SC: Conviction. TeraFLOP or not, it will be interesting to see what kind of performance the G92 can dish out.



First the big news: BT and Sony Computer Entertainment Europe (SCEE) have signed a four year deal to transform the PSP by adding wireless broadband communication functions, including high quality video and voice calls and instant messaging. At first, only PSP-to-PSP calls will be supported, but this will soon be followed by the ability to support calls and messages between PSPs, computers, regular phones and mobiles. The service will first be introduced in the UK and the rest of Europe. Surprisingly, no details were released on when the service will be available in the US, which has a PSP user base of 7.4 million. I guess this is Sony’s answer to the Nintendo DS, which is set to break all sales records for a gaming handheld before its time runs out, but there are a hundred reasons why this strategy won’t work, at least with the current avatar of the PSP. Here are my top 5 reasons:

  • Anybody who’s seen or held the PSP knows how big it really is; with its large screen and dedicated gaming buttons, it’s not something you would want to be seen talking into.
  • There are already so many complaints about the PSP’s battery life, and with video and voice calls added, it’s anybody’s guess how long the battery would last.
  • Video calls will need a cam, and along with Wi-Fi, the new hardware coming into the PSP will have implications for the size of the device. Will it get bulkier, or is Sony going to rethink their whole design strategy for the PSP and release it as a new model, with the current PSP continuing as a low price model?
  • The D-pad and associated buttons can’t be used for messaging. Of course they know that, so does that mean more buttons, or rather a small alphanumeric keypad? I think a touchscreen would be better, but what impact would that have on the cost and size of the next gen PSP?
  • Last but not least, the cost. Just how much is this gaming device cum phone cum Wi-Fi internet device cum media player going to cost? Not as much as the PS3, I hope 🙂

What could be done instead is to improve the PSP as what it was meant to be, a portable gaming and media device: improve the battery life, provide decent internal storage, offer better content delivery and, for god’s sake, drop that damn UMD drive.

From the recent announcements of the chip majors, it seems the catch-up games they have been playing with each other are over.
What am I talking about? First it was all about pushing clock speeds up, and more recently it has been about putting more cores on a chip more efficiently than the competition. But as multicore architectures become more mature, chip majors like Intel and AMD have started to diverge in their multicore strategies. Intel is moving towards homogeneous cores, while AMD and IBM look to be adopting a more heterogeneous approach.

Homogeneous, as the word implies, means all the processing units (cores) on a processor are of the same type and divide the workload between them for maximum efficiency. The heterogeneous approach is a little more complicated, as it may be implemented in a number of ways; using different sized cores for different dedicated functions is one approach. Another is to split a problem’s workload between a general-purpose processor and one or more specialized, problem-specific processors. Heterogeneous computing is a broader research area and the concept has been around for a while now; it also encompasses efforts like GPGPU computing, the ClearSpeed accelerators and more recent efforts like the AGEIA PhysX co-processor. The most recent and best example of a heterogeneous processor would be the STI Cell B.E. with its PowerPC core and eight synergistic processor units. AMD has plans for a similar architecture, and their first heterogeneous multicore offering may be a combined CPU and GPU in the client space.

On the software side, task-level parallelism and workload partitioning continue to be the dominant issues for multi-core platforms, for both heterogeneous and homogeneous architectures. These issues will be more acute on heterogeneous multi-core systems, since the specialized processors throw up a new set of problems. I believe heterogeneous computing is geared towards extreme performance computing; the GPGPU movement and the Cell’s performance with Folding@home prove this point. General purpose computing might go the homogeneous way, as the challenges are far smaller and are fast being resolved.
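To make the partitioning distinction concrete, here is a minimal toy sketch (my own illustration, not anything from Intel, AMD or IBM; all function names are made up): in the homogeneous case the same worker function is simply replicated across identical cores, while in the heterogeneous case part of the workload is handed to a stand-in for a specialized unit.

```python
# Toy contrast between homogeneous and heterogeneous workload partitioning.
from concurrent.futures import ProcessPoolExecutor

def generic_work(chunk):
    # Any core can run this; a homogeneous design just splits the data evenly.
    return sum(x * x for x in chunk)

def specialized_work(chunk):
    # Stand-in for a kernel you would hand to a dedicated unit
    # (GPU, SPU, physics co-processor) in a heterogeneous design.
    return sum(x ** 3 for x in chunk)

def homogeneous(data, workers=4):
    # Identical workers, identical code, evenly split data.
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(generic_work, chunks))

def heterogeneous(data):
    # Host core keeps the control-heavy half, the "accelerator" gets the hot loop.
    half = len(data) // 2
    return generic_work(data[:half]) + specialized_work(data[half:])

if __name__ == "__main__":
    data = list(range(100_000))
    print(homogeneous(data), heterogeneous(data))
```

The hard part, of course, is deciding where that split goes and moving the data across it, which is exactly the new set of problems the specialized processors throw up.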

I happened to stumble upon this at tgdaily. It’s called the Optimus Maximus keyboard, and it is developed by a Russian company called Art Lebedev Studio. Now for the cool part of the story: every key of the Optimus Maximus keyboard is a stand-alone display showing the function it is currently associated with.

Other disturbing features of this obscenely expensive piece of hardware are as follows: a stand-alone display in each of its 113 keys. Each display measures 10.1 x 10.1 mm and offers a resolution of 48 x 48 pixels. Apparently the keys can not only display images, but videos with frame rates of up to 10 fps as well. Up to 65,536 colors are supported, which can be seen at viewing angles of up to 160 degrees. Image and video layouts are stored on SD cards, which can be inserted at the back of the keyboard.

Hmmm… really impressive. If you think about it, it’s quite an achievement, but then again, how many computer users ever look down at their damn keyboards while typing? I don’t know many, and for $1500 I’ll build a kick-ass rig complete with disco lights. Instead of this, why not spend all the effort on making keyboards more ergonomic or durable, or try to put in features which make life easier for people with disabilities? This goes right up there with the Finger Nose Hair Trimmer in my list of utterly useless products. Anyone who can give me one good reason to buy the Optimus Maximus gets a candy. Any takers??

May 21 saw the annual PCI Special Interest Group developers’ conference in San Jose, California. It seems that the move to PCI-E 2.0 is going to happen very soon, with a lot of major players showing off PCI-E 2.0 technology at the conference. For the uninitiated, PCI-E 2.0 has been in development for some time now and doubles the per-lane interconnect bit rate from 2.5 GT/s to 5 GT/s, which effectively increases the aggregate bandwidth of a 16-lane link to approximately 16 GB/s.
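Here is how that 16 GB/s figure falls out of the lane rate, as a quick back-of-the-envelope calculation (it assumes the usual 8b/10b encoding on the PCI-E physical layer and counts traffic in both directions):

```python
# Rough PCI-E 2.0 bandwidth arithmetic (assumes 8b/10b line encoding).
lane_rate_gt = 5.0            # giga-transfers per second, per lane
encoding_efficiency = 8 / 10  # 8b/10b: 8 data bits carried per 10 bits on the wire
lanes = 16

gbps_per_lane = lane_rate_gt * encoding_efficiency  # 4 Gbit/s of payload per lane
gbytes_per_lane = gbps_per_lane / 8                 # 0.5 GB/s per lane, per direction
per_direction = gbytes_per_lane * lanes             # 8 GB/s for a x16 link
aggregate = per_direction * 2                       # ~16 GB/s counting both directions

print(per_direction, aggregate)                     # 8.0 16.0
```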
Intel, which promised to launch a PCI-E 2.0 motherboard before 2008 rolls in, demonstrated unreleased AMD and nVidia graphics chips on its Stoakley chipset for workstations, which offers two PCI-E 2.0 ports supporting 16 parallel lanes each. Majors like ARM, LSI, NEC and Synopsys also showed off their PCI-E 2.0 technology at the conference.
Intel is expected to release its first chipsets supporting PCIe 2.0 in the second quarter of 2007 with its ‘Bearlake’ family. AMD will start supporting PCIe 2.0 with its RD700 chipset series, and NVIDIA with their MCP72 chipset. The PCI SIG is already working to define a version 3.0 of Express that could appear in products in late 2009; it will probably target 8 or 10 gigatransfers per second.

So come 2008, get ready to embrace Express 2.0 as the new standard, and also gear up for faster, high performance graphics cards which will eat up to 300W of power under the Express 2.0 specification. How fast will the transition happen? Looking at the merciless move from PCI to AGP and then to PCI Express, I would say soon… very soon 🙂

I am back, after an amazing trip to Goa and a few trips to the hospital owing to a horrendous bout of viral fever. I really needed this break to get away from my boring daily schedule and get some time for myself. I guess it worked, because my energy levels are at an all time high. So it’s back to the usual now, at least for the next two months, because then I move on to do my masters.