Latency Beyond Throughput | @CloudExpo #DataCenter #Storage #SSD #SCM
Consider the analogy of a highway: on a one-lane road, latency is one vehicle making a round trip between two end points

Storage is moving to flash, and flash is getting faster, so people keep asking me why I keep talking about latency as if there is a problem.  Isn't faster flash going to just make everything faster?  Won't "the rising tide lift all boats"?

Flash as a storage medium is indeed "faster" than the spinning hard disks we've all been using for decades. But when it is used to simulate a hard disk, as is the case with SSD products, there are software layers that prevent it from reaching its full potential. That explanation always gets heads nodding, because it is obvious. But what about when the flash media is not simply packaged into an "SSD" and connected over SATA or SAS, but instead can be addressed via NVMe over PCIe? Doesn't that make the problem of hard disk drive emulation go away?

Not entirely. For one thing, in some cases the SSD abstraction is maintained despite the drive being connected via PCIe. That is faster than SAS or SATA, but the software speed bumps that keep data from driving too fast through the storage parking lot are still there.

But let's suppose you have a flash card designed specifically for NVMe, one that allows more sophisticated memory-addressing software mechanisms to unleash its greater potential. And what about the Storage Class Memory (SCM) products coming to market, which blur the distinction between DRAM and the non-volatile media previously relegated to the storage layer? Hasn't hardware solved the performance problem?

I wish it were so.  But it comes back to the subtle distinction between latency and throughput, and what it means to the software inside the kernel.

At a high level, it's easy to think of latency as the inverse of throughput (and vice versa).  For a simple, single-threaded series of operations, that should be literally true.  But it is more complicated when you have many operations occurring in parallel.
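
A minimal arithmetic sketch of that relationship (not from the article; the queue depths and latencies below are made-up illustrative numbers, not measurements of any particular device):

```python
# A minimal sketch: sustained throughput equals the number of operations
# in flight divided by the per-operation latency (Little's Law).
# All numbers below are made-up illustrations.

def throughput_iops(in_flight: int, latency_s: float) -> float:
    """Sustained operations per second for a device that keeps
    `in_flight` requests outstanding, each completing in `latency_s`."""
    return in_flight / latency_s

# Single-threaded: throughput really is just the inverse of latency.
print(throughput_iops(in_flight=1, latency_s=100e-6))    # 10,000 IOPS

# Parallel: 32 outstanding requests at the same per-request latency.
# Aggregate throughput is 32x higher, but no single request got faster.
print(throughput_iops(in_flight=32, latency_s=100e-6))   # 320,000 IOPS

# Halving latency helps both dimensions at once: each request completes
# sooner AND the same 32 slots finish twice as many requests per second.
print(throughput_iops(in_flight=32, latency_s=50e-6))    # 640,000 IOPS
```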

Consider the analogy of a highway with a single lane: latency is one vehicle making a round trip between two end points. Let's imagine a 4-passenger car traveling on that single-lane road for 50 kilometers across a desert between two depots. Each round trip is an event, and its completion time is its latency. If we want to improve latency, we could make the car faster.

Latency - A Faster Car
If we want to improve throughput we could make the car bigger - replace it with a huge, slow bus - and the net result would be worse latency for each passenger but more passenger round trips in aggregate, i.e., greater throughput. We could further increase throughput by building more lanes on the highway and running more vehicles. Optimally, we would make all the vehicles faster, whether they become even larger trains of trailers pulled by a truck or swarms of speedy motorcycles spreading out over the ever-multiplying number of new lanes.

Throughput - Many Jammed Lanes
The point here is that we really have three performance dimensions to consider:

  1. How quick is the round trip for any one passenger? That is latency.
  2. How many passengers in aggregate per unit time? That is throughput.
  3. How many independent events (vehicles)? That is accesses.

Enabling more accesses will be a natural consequence of lower latency, because the number of lanes is fixed (in the analogy) and the number of queues is finite (applying the analogy to software).
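
As a rough illustration of measuring those three dimensions separately, the sketch below times small positional reads against an ordinary file, with a fixed-size thread pool playing the role of the lanes. The file path, block size, counts, and pool size are arbitrary assumptions, and the results will reflect the page cache as much as the device, so treat it as the shape of the measurement rather than a benchmark.

```python
# Rough measurement sketch, not from the article: time small positional
# reads and report the three dimensions discussed above. PATH, BLOCK,
# COUNT, and LANES are arbitrary assumptions for illustration only.
import os
import time
from concurrent.futures import ThreadPoolExecutor

PATH, BLOCK, COUNT, LANES = "/tmp/testfile", 4096, 10_000, 8

fd = os.open(PATH, os.O_RDONLY)   # one shared descriptor; pread() takes an
size = os.path.getsize(PATH)      # explicit offset, so threads don't interfere

def one_read(offset: int) -> float:
    """Issue a single 4 KiB pread() and return its latency in seconds."""
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)
    return time.perf_counter() - t0

offsets = [(i * BLOCK) % max(size - BLOCK, BLOCK) for i in range(COUNT)]

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=LANES) as pool:   # LANES ~ highway lanes
    latencies = list(pool.map(one_read, offsets))     # COUNT ~ vehicles
elapsed = time.perf_counter() - t0
os.close(fd)

print(f"avg latency : {sum(latencies) / COUNT * 1e6:.1f} us")        # dimension 1
print(f"throughput  : {COUNT * BLOCK / elapsed / 2**20:.1f} MiB/s")  # dimension 2
print(f"accesses    : {COUNT / elapsed:,.0f} reads/s")               # dimension 3
```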

To optimize the use of emerging hardware technology, we shouldn't merely rely on building more lanes on the highway (increasing throughput potential via flash media capability). We should also be making the vehicles faster (improving latency). Enhancing the hardware is in the hands of Intel, Micron, Samsung, and all the rest of the players in that space.

Simply making use of the bigger/faster/cheaper nonvolatile "flash" hardware components coming to market in wave after wave of impressive innovation is straightforward. Everybody in the storage industry is doing it.  And adding more of it is like adding lanes to the highway to get more throughput.

But doing something meaningful about latency is not easy.  It's hard.  It means rethinking the fundamentals, changing the innards of the kernel, ripping out cruft with both hands and designing new streamlined code to handle storage I/O for the 21st century.

About Amit Golander
Dr. Amit Golander is the Chief Technology Officer (CTO) and R&D Manager at Plexistor. He is responsible for developing the product and works with CEO Sharon Azulai on the vision for the company's technology and products.

Golander brings to Plexistor a rich research, development, and leadership background, having distinguished himself in the corporate, startup, and higher-education realms. In addition to his work in the business and academic sectors, Golander holds over 50 patents and has published a number of technology articles in prestigious engineering journals.

Prior to Plexistor, Golander was VP of Systems and Product at Primary Data, where he was responsible for strategic partnerships, alliances, and beta customers, and worked closely with the R&D teams on day-to-day product management. Golander also worked at IBM for over twelve years on data center and cloud infrastructure.

Golander has also mentored M.Sc. students and taught computer architecture and quantitative analysis at Tel Aviv University.

Golander received his B.Sc. in computer science and electrical engineering and his Ph.D. in computer architecture from Tel Aviv University. His thesis won the Intel Research Award. Prior to his academic studies, Golander served as an intelligence officer in the Israel Defense Forces (IDF).
