A Cloudy Future for Networks and Data Centers in 2010
The message from the US Government CIO was clear

The message from the VC community is clear – “don’t waste our seed money on network and server equipment.”
The message from the US Government CIO was clear – the US Government will consolidate data centers and start moving towards cloud computing. The message from the software and hardware vendors is clear – there is an enormous investment in cloud computing technologies and services.

If nothing else, the economic woes of the past two years have taught us we need to be a lot smarter about how we allocate limited CAPEX and OPEX budgets. Whether we choose to implement our IT architecture in a public cloud, an enterprise cloud, or not at all, we still must consider the alternatives. Those alternatives must include careful consideration of cloud computing.

Data Center within a Data Center Cloud
Cloud 101 teaches us that virtualization efficiently uses compute and storage resources in the enterprise. Cloud 201 teaches us that content networks facing the Internet can make use of on-demand compute and storage capacity in close proximity to networks. Cloud 301 tells us that a distributed cloud gives great flexibility to both enterprise and Internet-facing content. The lesson plan for Cloud 401 is still being drafted.

Data Center 2010
Data center operators traditionally sell space based on cabinets, partial cabinets, cages, private suites, and, in the case of carrier hotels, space in the main distribution frame. In the old days revenue was based on space and cross connects; today it is based on the power consumed by equipment.
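
To make the shift concrete, here is a minimal sketch comparing a space-based bill with a power-based one. All rates and equipment figures are hypothetical assumptions chosen only for illustration, not real colocation quotes.

```python
# Hypothetical comparison of space-based vs. power-based colocation pricing.
# Every rate and draw figure below is an illustrative assumption.

SPACE_RATE_PER_CABINET = 800.00   # USD per cabinet per month (assumed)
CROSS_CONNECT_RATE     = 150.00   # USD per cross connect per month (assumed)
POWER_RATE_PER_KW      = 175.00   # USD per kW per month (assumed)

def space_based_bill(cabinets: int, cross_connects: int) -> float:
    """Old model: revenue keyed to footprint and interconnections."""
    return cabinets * SPACE_RATE_PER_CABINET + cross_connects * CROSS_CONNECT_RATE

def power_based_bill(kw_drawn: float) -> float:
    """Current model: revenue keyed to power consumed by the equipment."""
    return kw_drawn * POWER_RATE_PER_KW

if __name__ == "__main__":
    # A single cabinet of dense, virtualized gear drawing 6 kW costs more under
    # the power model than the same footprint did under the space model.
    print(f"Space-based: ${space_based_bill(cabinets=1, cross_connects=2):,.2f}/month")
    print(f"Power-based: ${power_based_bill(kw_drawn=6.0):,.2f}/month")
```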

If the intent of data center consolidation is to relieve the enterprise or content provider of unnecessary CAPEX and OPEX burden, then the data center sales teams should be gearing up for a feeding frenzy of opportunity. Every public cloud service provider from Amazon down to the smallest cloud startup will be looking for quality data center space, preferably close to network interconnection points.

In fact, in the long run, if the vision of cloud computing and virtualization holds true, then the existing data center model should be seen as a three-dimensional set of objects within a resource grid, not entirely dissimilar to the idea set forth by Nicholas Carr in his book “The Big Switch.”

Facilities will return to their roots of concrete, power, and air-conditioning, adding cloud resources (or attracting cloud service providers to provide those resources), and the cabinets, cages, and private suites will start being dismantled to allow better use of electrical and cooling resources within the data center.

Rethinking the Data Center
Looking at 3Tera’s AppLogic utility brings a strange vision to mind. If I can build a router, switch, server, and firewall into my profile via a drag-and-drop utility, then why would I want to consider buying my own hardware?

If storage becomes part of the layer 2 switch, then why would I consider installing my own SAN, NAS, or Fibre Channel infrastructure? Why not find a cloud service provider with adequate resources to run my business within their infrastructure, particularly if their network proximity and capacity is adequate to meet any traffic requirement my business demands?

In this case, if the technology behind AppLogic and other similar Platform as a Service (PaaS) offerings is true to the marketing hype, then we can start throwing value back to the application. The network, connectivity, and the compute/storage resources become an assumed commodity – much like the freeway system, water, or the electrical grid.
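
The idea behind this kind of PaaS canvas can be sketched in a few lines: the infrastructure is just a description the provider instantiates on demand. The schema and provisioning loop below are hypothetical – they are not 3Tera/AppLogic’s actual API – and only illustrate the “describe it, don’t buy it” model.

```python
# Hypothetical, declarative description of the "drag and drop" infrastructure
# profile discussed above. This is a sketch of describing infrastructure as
# data instead of purchasing hardware; none of it is a real vendor API.

from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str                      # "router", "switch", "server", "firewall"
    name: str
    cpu_cores: int = 1
    memory_gb: int = 2
    storage_gb: int = 0            # storage presented by the provider's grid

@dataclass
class ApplicationProfile:
    name: str
    components: list = field(default_factory=list)
    links: list = field(default_factory=list)   # (from_name, to_name) pairs

    def add(self, component: Component) -> "ApplicationProfile":
        self.components.append(component)
        return self

    def connect(self, a: str, b: str) -> "ApplicationProfile":
        self.links.append((a, b))
        return self

# Compose the same topology you would otherwise rack and stack yourself.
profile = (
    ApplicationProfile(name="storefront")
    .add(Component("firewall", "fw1"))
    .add(Component("router",   "edge1"))
    .add(Component("server",   "web1", cpu_cores=4, memory_gb=8, storage_gb=200))
    .connect("fw1", "edge1")
    .connect("edge1", "web1")
)

# A provider-side scheduler would walk this description and allocate grid
# resources; the owner never touches physical routers, switches, or SANs.
for c in profile.components:
    print(f"provision {c.kind:8s} {c.name} ({c.cpu_cores} cores, {c.memory_gb} GB RAM)")
```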

Flowing the Profile to the User
We old guys used to watch a SciFi series called “Max Headroom.” Max Headroom was a fictional character who lived within the “Ether,” able to move around through computers and electrical grids – and pop up wherever in the network he desired. Max could also absorb any of the information within computer systems or other electronic intelligence sources, and deliver his findings to news reporters who played the role of investigative journalists.

We are entering an electronic generation not too different from the world of Max Headroom. If we use social networking, or public utility applications such as Hotmail, Gmail, or Yahoo Mail, our profile flows to the network point closest to our last request for application access. There may be a permanent image of our data stored in a mother ship, but the most active part of our profile is parsed to a correlation database near our access point.

Thus, if I am a Gmail user living in Los Angeles, my correlated profile is available at a Google data cache someplace with proximity to Los Angeles. If I travel to Hong Kong, then Gmail thinks, “Hmmm… he is in HK; we should parse his Gmail image to our HK cache and hope he gets the best possible performance out of the Gmail product from that point.”
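
A simple way to picture that “parse the image to the nearest cache” behavior is a routine that, on each login, picks the cache region closest to the user and replicates the active slice of the profile there. The regions, coordinates, and replication stub below are assumptions for illustration, not how Gmail actually works internally.

```python
# Hypothetical sketch of "the profile flows to the network point closest to the
# user." Regions, coordinates, and replicate_profile() are illustrative only.

import math

# (latitude, longitude) of assumed cache locations
CACHE_REGIONS = {
    "us-west":   (34.05, -118.24),   # Los Angeles
    "asia-east": (22.32,  114.17),   # Hong Kong
    "eu-west":   (51.51,   -0.13),   # London
}

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_region(user_location):
    """Pick the cache region with the shortest distance to the user."""
    return min(CACHE_REGIONS, key=lambda r: distance_km(user_location, CACHE_REGIONS[r]))

def replicate_profile(user_id, region):
    """Stand-in for copying the active slice of a profile to a regional cache."""
    print(f"parsing active profile of {user_id} to {region} cache")

# The Los Angeles user logs in from Hong Kong; the active profile follows.
login_location = (22.28, 114.16)          # Hong Kong
replicate_profile("gmail-user-42", nearest_region(login_location))
```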

I, as the user, do not care which data center my Gmail profile is cached at; I only care that my end-user experience is good and I can get my work done without unnecessary pain.

The data center becomes virtual. The application flows to the location needed to do the job and make me happy. XYZ.com, which handles my mail day-to-day, must understand its product will become less relevant and less effective if its performance on a global scale does not meet international standards. Those standards are being set by companies that are using cloud computing on a global, distributed model to do the job.

2010 is the Year Data Centers Evolve to Support the Cloud
The day of the 100 sq ft data center cage is rapidly becoming as senseless as buying a used DMS-250. The cost in hardware, software, peopleware, and the operational expense of running a small data center presence simply does not make sense. Nearly everything that can be done in a 100 sq ft cage can be done in a cloud, forcing the services provider to concentrate on delivering end-user value, and leaving the compute, storage, and network access to utility providers.

And when the 100 sq ft cage is absorbed into a more efficient resource, the cost – electrical, mechanical, and environmental – will drop by nearly 50%, given the potential for better data center management using strict hot/cold aisle separation, hot or cold aisle containment, and containers – all those things data center operators are scrambling to understand and implement.

Argue the point, but by the end of 2010, the ugly data center caterpillar will come out of its cocoon as a better, stronger, and very cloudy utility for the information technology and interconnected world to exploit.

Read the original blog entry...

About John Savageau
John Savageau is a lifelong telecom and Internet geek, with a deep interest in the environment and all things green. Whether drilling into the technology of human communications and cloud computing, or describing a blue whale off Catalina Island, Savageau will try to present complex ideas in terms that are easily appreciated and understood.

Savageau is currently focusing efforts on data center consolidation strategies, enterprise architectures, and cloud computing migration planning in developing countries, including Azerbaijan, The Philippines, Palestine, Indonesia, Moldova, Egypt, and Vietnam.

John Savageau is President of Pacific-Tier Communications, dividing time between Honolulu and Burbank, California.

A former career US Air Force officer, Savageau graduated with a Master of Science degree in Operations Management from the University of Arkansas and also received Bachelor of Arts degrees in Asian Studies and Information Systems Management from the University of Maryland.
