
Monday, December 28, 2009

Is it worthwhile to buy an e-reader?

Well, I am writing this blog after a long break. It is the Christmas holidays and there is no particular plan, so I thought it worthwhile to update my blog, though this post is a deviation from my usual topic of IT infrastructure. I am fond of books and always carry my favourites wherever I travel, and my house is piling up with stacks of them. I read whenever I find time, on the tube, the train or the bus. Recently I started to digitise my library, and I now prefer to buy an e-book rather than the paper edition: it is cheaper, easier to carry and can also be read on my laptop. So I thought it was time to invest in an e-book reader, and did a bit of research on the two popular models in the UK, the Amazon Kindle and the Sony Reader.

E-readers are becoming increasingly popular; some five million were sold worldwide in 2009. Almost all of them use the same display technology, E Ink. It is based on tiny capsules filled with positively charged white particles and negatively charged black particles suspended in a clear liquid. Transparent electrodes placed above and below the layer of microcapsules create electric fields which, depending on the polarity, push either the white or the black particles to the surface to form the display. This has many advantages: there is no backlight, so the screen can be read in bright sunlight, and battery power is drawn only when a page is turned, which gives very long battery life.

Finally, on Boxing Day I went to the Sony Centre and bought the Sony touch screen edition for £249. The Amazon Kindle is similar, except that it is always connected to Amazon's Whispernet, through which books can be bought from Amazon online and delivered to the Kindle in 60 seconds. On reaching home, with much enthusiasm I loaded one of the books from my digital library onto the reader. This particular book was in PDF format with charts and diagrams. On the e-reader the font came out too small to read, so I tried increasing it from small to medium; the text became readable, but all the images and tables were gone. I tried switching to landscape orientation, but at the original font size it was still not comfortable to read. Some more research on the net revealed that both the Sony Reader and the Amazon Kindle have problems with PDF, even though both support the format in their latest editions. The Sony scores a point over the Kindle here, but both have major disadvantages: on the Sony, PDF fonts can be enlarged but images and tables get distorted, while the Kindle offers no PDF font scaling at all, and although its landscape mode is quite efficient at trimming the white border space, the text is still not comfortable on the eyes. Books in Sony's proprietary LRF format and the open EPUB format render quite well. In the end I did not feel it was worthwhile at the present state of maturity and returned the device the next day. I would like to see the following improvements in this technology before I invest again:

  • Faster page turns; currently it takes a couple of seconds before the next page appears.


  • Colour display. An LCD could provide it, but LCDs are battery hungry and not easy on the eyes; another option is organic light emitting diodes, but these are still quite expensive. E Ink and some of its rivals are currently developing colour electrophoretic displays by adding a layer of coloured filters above the black and white capsules, though making the filters and the control system small enough remains a challenge. The current projection is that companies will overcome this problem during 2010 and produce colour displays; until then it will be some variant of today's LCD technology that provides colour in e-readers.


  • Good support for PDF and ePub formats.


I will wait until the end of 2010 before I buy an e-reader; until then, it is good old paper books and my laptop for e-books.

Friday, February 22, 2008

Cloud Computing: is it Hype or the Future of Data Centers?

I was reading the press release about IBM's Blue Cloud initiative. The press release was old, but it prompted me to do some research on cloud computing, and I came across some interesting organisations providing this service and learned about the technologies behind it. Though it is still early days, it reinforces my belief that we are heading towards utility-based computing. Service providers who have no plan or vision for this are surely going to suffer in the next couple of years.
What is cloud computing?
I have cut and pasted this definition from Wikipedia:
“Cloud computing is a new (circa late 2007) label for the subset of grid computing that includes utility computing and other approaches to the use of shared computing resources, rather than having local servers or personal devices handling users' applications.”
At the heart of cloud computing is virtualisation. A service provider delivers a virtualised computing resource to a consumer over the network or the Internet. Hosting providers that offer virtualised server images can also be termed cloud service providers. As they move up the value chain, however, we find providers with grids of thousands of machines offering these infrastructure utility services through a web interface. It does not matter which physical server your application is using; the provider only guarantees resources such as CPU, RAM and storage, and the next time you reboot your application it could be moved from London to Hong Kong. Most of them use MapReduce and Hadoop for processing large data sets: the work is broken into many small chunks so that it can be distributed in parallel across thousands of computers. It all started at Google as MapReduce, was eventually taken into the Apache projects and produced Hadoop, which is now an open source Java-based framework. IBM is using Hadoop in its Blue Cloud initiative.
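To make the idea concrete, here is a minimal, single-machine sketch of the MapReduce pattern in plain Java. It is not Hadoop code, just an illustration of the map, shuffle and reduce phases that Hadoop distributes across a cluster; the word-count job and the input lines are invented for the example.

```java
import java.util.*;

// Toy word count showing the three MapReduce phases on one machine.
// In Hadoop the map and reduce steps run in parallel on many nodes.
public class MapReduceSketch {

    public static void main(String[] args) {
        List<String> input = Arrays.asList(
                "the cloud is the future",
                "the grid powers the cloud");

        // Map phase: emit a (word, 1) pair for every word of every line.
        List<Map.Entry<String, Integer>> mapped =
                new ArrayList<Map.Entry<String, Integer>>();
        for (String line : input) {
            for (String word : line.split("\\s+")) {
                mapped.add(new AbstractMap.SimpleEntry<String, Integer>(word, 1));
            }
        }

        // Shuffle phase: group the emitted values by key (the word).
        Map<String, List<Integer>> grouped = new TreeMap<String, List<Integer>>();
        for (Map.Entry<String, Integer> pair : mapped) {
            List<Integer> values = grouped.get(pair.getKey());
            if (values == null) {
                values = new ArrayList<Integer>();
                grouped.put(pair.getKey(), values);
            }
            values.add(pair.getValue());
        }

        // Reduce phase: sum the values for each key.
        for (Map.Entry<String, List<Integer>> entry : grouped.entrySet()) {
            int total = 0;
            for (int value : entry.getValue()) {
                total += value;
            }
            System.out.println(entry.getKey() + " -> " + total);
        }
    }
}
```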
Amazon Web Services' EC2 (Elastic Compute Cloud) service is built around Xen virtualised images. These services can free us from owning hardware. Setup is straightforward for simple websites, but for a more complicated infrastructure all the configuration work still needs to be done and requires the assistance of a system administrator. EC2 uses Amazon S3 for storage. Recently S3 was in the news for all the wrong reasons when it was down for a couple of hours, and serious corporate customers will hesitate to use this kind of service. Yahoo and Google are also planning to provide these services.

While doing some Internet research I came across a company called 3Tera. It takes a holistic approach and provides a wonderful service: it allows customers to create their own virtual private data centre. 3Tera has partnered with a number of hosting providers who supply commodity hardware (50,000+ servers). Using its AppLogic grid operating system it creates a layer of abstraction over those servers and creates virtual images, and it has built its storage network on the commodity direct-attached disks of those same servers. According to 3Tera it provides the first grid operating software for web-based applications, and it offers an AJAX-based interface for configuring the infrastructure virtually. In the demo, 3Tera shows how to create a virtual infrastructure comprising a firewall, load balancer, web servers, SQL server and NAS simply by drag and drop, as if one were drawing a Visio diagram; they are all virtual images within 3Tera's cloud, and application instances are then deployed on these virtual servers. The entire infrastructure is up in just ten minutes. I believe this will be the future of infrastructure providers, and I strongly recommend you see the demo yourself.

Wednesday, February 20, 2008

Convergence of SOA and GRID

If we want to predict the future we need to look back at history, and industrial history shows that grid computing is inevitable. With the advent of grid-based electricity, the turbines and steam engines that used to power individual factories were replaced by an external provider offering a reliable source of electricity and on-demand capacity. The same is bound to happen for computational needs. Grid has been around for quite some time, but it never gained widespread corporate acceptance except for highly specialised, compute-intensive tasks such as simulation or analytics; it was used mainly as high performance computing (HPC). The reason was that applications were not geared up to take advantage of the grid. Now, with the rise of Service Oriented Architecture, this barrier is slowly diminishing, and GRID and SOA are on a convergence path. The latest version of the open source toolkit for grid computing, Globus Toolkit 4.0, is based on web services.

We can describe a grid as an application which coordinates resources (CPU, storage) across different nodes. There is a scheduler/coordinator/management node which distributes workloads or jobs to the other nodes; the jobs use the resources of those nodes and return their results to the coordinator. There are various ways for the coordinator and the nodes to communicate, but most systems currently make use of web services, so the grid itself is already based on SOA principles.

Second, enterprise applications used to be built from tightly coupled components, with the same application performing many functions, and deploying these on a grid was a challenge. As service oriented architecture becomes a mainstream phenomenon, applications are broken into components that cater to different services, with standard interfaces and methods to access each service remotely; it no longer matters where in the network these components reside. Such applications become ideal candidates for a grid-based system when their resource requirement is more than a single server can offer: with a grid the work can be spread cost-effectively across multiple servers, which provides both computational power and eliminates the single point of failure. Amazon's S3 storage service, for example, is based on web services deployed in a grid environment.

Each approach has its own powerful advantages: the advantages of SOA are well known by now, and a grid creates an abstraction layer around the whole computing infrastructure. In reality, however, the combined adoption of GRID and SOA in enterprises is still quite low. According to one research organisation, grid adoption is high among organisations taking small steps into SOA, but quite low in organisations with full SOA implementations. Nevertheless, the combined benefits will be hard for enterprises to ignore, and grid will pave the way for utility computing. Will SOA and GRID set off a new revolution, or is it just another IT hype cycle? We need to wait and see, but the concept and architecture behind the grid will prevail, even if it gets merged into today's much hyped cloud computing. I will write my next blog about cloud computing.
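The coordinator/worker split described above can be illustrated with a toy, single-JVM sketch. This is only an analogy: in a real grid each worker would be a separate machine reached over a web-service interface, and the job would be a far heavier computation; all names and numbers here are invented for the illustration.

```java
import java.util.*;
import java.util.concurrent.*;

// Toy grid coordinator: split a job into tasks, dispatch them to
// worker "nodes" (threads here), then aggregate the partial results.
public class GridCoordinatorSketch {

    // One unit of work sent to a node: sum a slice of the number range.
    static class SumTask implements Callable<Long> {
        private final long from, to;
        SumTask(long from, long to) { this.from = from; this.to = to; }
        public Long call() {
            long total = 0;
            for (long i = from; i < to; i++) {
                total += i;
            }
            return total;
        }
    }

    public static void main(String[] args) throws Exception {
        int nodes = 4;                 // pretend we have four worker nodes
        long jobSize = 100000000L;     // the overall job: sum 0..jobSize-1
        ExecutorService grid = Executors.newFixedThreadPool(nodes);

        // Coordinator: break the job into equal chunks, one per node.
        List<Future<Long>> results = new ArrayList<Future<Long>>();
        long chunk = jobSize / nodes;
        for (int n = 0; n < nodes; n++) {
            long from = n * chunk;
            long to = (n == nodes - 1) ? jobSize : from + chunk;
            results.add(grid.submit(new SumTask(from, to)));
        }

        // Coordinator: collect the partial results returned by the nodes.
        long grandTotal = 0;
        for (Future<Long> partial : results) {
            grandTotal += partial.get();
        }
        grid.shutdown();
        System.out.println("Grand total = " + grandTotal);
    }
}
```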

Tuesday, February 19, 2008

TOGAF and ITIL V3.0

TOGAF and ITIL both focus on business and IT integration: ITIL addresses the needs of service management while TOGAF addresses the concerns of enterprise architecture. They should be taken as complementary frameworks rather than as separate attempts to address EA and service management differently. As a matter of fact, ITIL has taken an appropriate step forward and also looks at the operation of the foundation of execution (the enterprise architecture). Rather than writing a lengthy blog which nobody will read, I will try to explain this through the following diagram, on the principle that a picture tells a thousand words. Click on the image to get a better view.


Sunday, February 17, 2008

Infrastructure Outsourcing V2.0

The rise of on-demand utility services is on the horizon. We have already seen many successful models for software as a service (SaaS), and the same will be extended to traditional infrastructure services such as storage on demand and processing on demand. Amazon has already started a storage-on-demand service: through its S3 storage solution it provides 1 GB of space for 15 cents per month. Amazon storage is accessed through standard SOAP and REST interfaces, and data transfer is handled over HTTP and the BitTorrent protocol. The Amazon infrastructure is built on inexpensive commodity hardware, and as more nodes are added overall reliability increases because there is no single point of failure, which is why Amazon can provide a highly reliable yet cost-effective service. Similarly, Amazon EC2 (Elastic Compute Cloud) provides entire compute capability as a web service (CPU, memory, storage, network), though this is still in beta.
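Just to show how thin the REST interface is, here is a minimal Java sketch that fetches a publicly readable S3 object with nothing more than an HTTP GET. The bucket and key names are made up for the example, and a private object would additionally need the request to be signed with the account's AWS credentials.

```java
import java.io.*;
import java.net.*;

// Minimal sketch: read a publicly readable S3 object over plain HTTP.
// The bucket and key are hypothetical; private objects need signed requests.
public class S3RestGet {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://s3.amazonaws.com/example-bucket/reports/summary.txt");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);   // print the object's contents
        }
        reader.close();
        conn.disconnect();
    }
}
```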
Traditional hardware vendors are taking a different approach, an example being HP's pay-per-use storage service. HP installs the storage device at the customer's premises based on need, and each month the customer is charged for the average capacity used in addition to a minimum percentage of the installed capacity. It cannot, though, be termed a true utility service in the way Amazon's can.
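For illustration only, since the exact commercial terms of such contracts vary: with 10 TB installed and, say, a 20% minimum commitment, the customer would pay for 2 TB every month regardless of use, plus the average capacity actually consumed during the month, so a month averaging 5 TB of use would be billed as roughly 7 TB. The numbers are invented; the point is that the floor makes it less "pay only for what you use" than Amazon's model.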
The situation is similar for on-demand processing, which has reached a stage of maturity. The prominent example is BNP Paribas' on-demand processing contract with IBM. The contract allows BNP Paribas to access the capacity of 2,500 BladeCenter servers, with provision to double that if required. The service is provided through IBM's Deep Computing Capacity on Demand centre, which has up to 13,000 processors (Intel, AMD and IBM).
Gartner’s May 2007 poll shows growing uptake of such utility services: 27% of 120 client organisations are already using some form of infrastructure utility, and 89% expect to do so.

Saturday, February 16, 2008

Notes from Storage Expo

Even though it is quite late, I thought it worthwhile to note a few things from the London Storage Expo held last October. Normally I avoid the big names like NetApp, EMC, IBM and HP at any expo and look out for innovative small companies. In a corner, a small stall caught my attention: Lefthand Networks (http://www.lefthandnetworks.com/). After an initial chat with an engineer I could see the potential and innovation in their solution. They have developed software called SAN/iQ which clusters storage nodes together, aggregating all of their resources into a single, larger storage system. The storage nodes are standard x86-based systems using the iSCSI protocol. Each SAN/iQ cluster responds to a single IP address, and every storage node in the cluster participates equally in sharing both the workload and the capacity of the whole cluster. There is a single centralised management console for the entire cluster, synchronous replication for multi-site SAN implementations, remote-copy asynchronous replication and snapshots, and they have integrated quite well with VMware. I can see the potential of their solution as a cost-effective option for the SME sector; my only concern is its performance for highly I/O-intensive online transaction processing systems.
The second stall that drew my attention was Storwize (http://www.storwize.com/). Their product does real-time data compression, compressing data as it is transmitted between a host and the storage device. They claim around a 95% storage capacity gain for database files and 65% across other data types. I was sceptical about the performance, but I believe the overhead of compression will be compensated by having less data to write to the disk subsystem.
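To put those figures in perspective: if the 95% gain is read as 95% space savings, a 100 GB database file would occupy roughly 5 GB on disk, while the 65% figure would leave about 35 GB out of 100 GB for other data types. These are purely illustrative numbers derived from the vendor's claim, not measurements.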

Friday, February 15, 2008

Enterprise Architecture Maturity Level

The IT landscape of an organization opens a window onto the enterprise architecture maturity of that organization. Going by the definitions in the classic book "Enterprise Architecture as Strategy" (http://www.architectureasstrategy.com/), an organization with multiple applications catering to local business functions and no technology standardization is at maturity level one, the business silo stage. An organization with technology standards and shared infrastructure is at maturity level two, standardized technology; as an organization moves from level one to level two there is a significant reduction in IT cost because there are fewer platforms to support. An organization with an enterprise view of data and reduced data redundancy is at maturity level three, also termed the optimized core: investment shifts from local applications to enterprise applications and shared data. When an organization builds modular applications on top of the optimized core it reaches maturity level four, business modularity; through web services the organization creates reusable business services at this level, or front-end processes which connect to the core data and back-end processes.

An organization needs to move from one level to the next sequentially; jumping levels is not a successful strategy, as the authors of the book show. This was reflected perfectly in one of my recent interactions with an organization that was a classic case of level one maturity: management tried to push for process standardization through an ERP system (an optimized core) without going through a technology standardization phase first. There was political resistance and chaos, and the entire project was heading towards failure.