Economic Value of Data (EvD) Challenges
Data has a direct impact on an organization’s financial investments and monetization capabilities

Well, my recent University of San Francisco research paper “Applying Economic Concepts To Big Data To Determine The Financial Value Of The Organization’s Data And Analytics Research Paper” has fueled some very interesting conversations. Most excellent! That was one of its goals.

It is important for organizations to invest the time and effort to understand the economic value of their data, because data has a direct impact on an organization's financial investments and monetization capabilities. However, calculating the economic value of data (EvD) is very difficult because:

  • Data does not have an innate fixed value, especially as compared to traditional assets, and
  • Using traditional accounting practices to calculate EvD doesn’t accurately capture the financial and economic potential of the data asset.

And in light of those points, let me share some thoughts that I probably should have made more evident in the research paper.

Factoid #1:  Data is NOT a Commodity (So Data is NOT the New Oil)
Crude oil is a commodity. West Texas Intermediate (WTI), also known as Texas light sweet, is a grade of crude oil used as a benchmark in oil pricing. This grade is described as light because of its relatively low density, and sweet because of its low sulfur content.  WTI is a light crude oil, with an API gravity of around 39.6, specific gravity of about 0.827 and less than 0.5% sulfur[1].

And here’s the important factoid about a commodity: every barrel of Texas light sweet is exactly like any other barrel of Texas light sweet. One barrel of Texas light sweet is indistinguishable from any other barrel of Texas light sweet. Oil is truly a commodity.

However, data is not a commodity. Data does not have a fixed chemical composition, and one piece of data is NOT indistinguishable from another. In fact, data may be more akin to genetic code, insofar as the genetic code defines who we are (see Figure 1).

Figure 1: Genetic Code[2]

Every piece of personal data – every sales transaction, consumer comment, social media post, phone call, text message, credit card transaction, fitness band reading, doctor visit, web browse, keyword search, etc. – comprises another “strand” of one’s “behavioral genetic code” that indicates one’s inclinations, tendencies, propensities, interests, passions, associations and affiliations.

It’s not just the raw data that holds valuable strands of our “behavioral genetic code”; the metadata about our transactional and engagement data is also a rich source of insights into our behavioral genetic code. For example, look at the metadata associated with a 140-character tweet. 140 characters wouldn’t seem to be much data. However, the richness of that 140-character tweet explodes when you start coupling the tweet with all the metadata necessary to understand those 140 characters in the context of the conversation (see Figure 2).

Figure 2: “Importance of Metadata in a Big Data World”
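To make the metadata point concrete, here is a minimal sketch in Python (the field names and values are hypothetical illustrations, not Twitter's actual API schema) comparing the size of a short tweet to the context that travels with it:

```python
# Hypothetical illustration: a short tweet vs. the metadata that surrounds it.
tweet_text = "Just landed in Austin for #SXSW - any BBQ recommendations near the convention center?"

# Illustrative metadata fields (names are assumptions, not a real API schema)
tweet_metadata = {
    "author_id": "u_1029384756",
    "timestamp_utc": "2017-03-10T18:42:07Z",
    "geo": {"lat": 30.2672, "lon": -97.7431, "place": "Austin, TX"},
    "client": "mobile_ios",
    "in_reply_to": None,
    "conversation_id": "c_5566778899",
    "hashtags": ["SXSW"],
    "mentions": [],
    "followers_count": 1342,
    "retweet_count": 3,
    "favorite_count": 11,
    "language": "en",
}

# Compare the raw message to the context that travels with it
raw_bytes = len(tweet_text.encode("utf-8"))
meta_bytes = len(str(tweet_metadata).encode("utf-8"))
print(f"raw text: {raw_bytes} bytes, metadata: {meta_bytes} bytes "
      f"({meta_bytes / raw_bytes:.1f}x the raw text)")
```

Even this stripped-down example carries several times more bytes of context than message, and each field (location, timing, conversation thread, audience) is another strand of the behavioral picture.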

The Bottom-line:
Data is not a commodity, which makes determining the economic value of data very difficult, and maybe even irrelevant, using traditional accounting techniques. Which brings us to the next point…

Factoid #2: Can’t Use Accounting Techniques to Calculate Economic Value of Data
The challenge with using accounting or GAAP (generally accepted accounting principles) techniques for determining the economic value of data is that accounting uses a retrospective view of your business to determine the value of assets. Accounting determines the value of assets based upon what the organization paid to acquire those assets.

Instead of using the retrospective accounting perspective, we want to take a forward-looking, predictive perspective to determine the economic value of data. We want to apply data science concepts and techniques to determine the EvD by looking at how the data will be used to optimize key business processes, uncover new revenue opportunities, reduce compliance and security risks, and create a more compelling customer experience. Think of it as determining the value of data based upon “value in use” (see Table 1).

| Accounting Perspective | Data Science Perspective |
| --- | --- |
| Historical valuation based upon knowing what has happened | Predictive valuation based upon knowing what is likely to happen and what action one should take |
| Value determination based upon what the organization paid for the asset in the past | Value determination based upon how the organization will monetize the asset in the future |
| Valuations are known with 100% confidence based upon what was paid for the asset | Valuations are based on probabilities, with confidence levels dependent upon how the asset will be used and monetized |
| Value determination based upon acquisition costs (“value in acquisition”) | Value determination based upon how the data will be used (“value in use”) |

Table 1: Accounting versus Data Science Perspectives
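To illustrate the right-hand column of Table 1, here is a rough sketch (all use cases, dollar figures, and probabilities are invented for illustration; this is not a method prescribed in the research paper) of valuing a dataset as a probability-weighted sum of its expected “value in use”:

```python
# Hypothetical sketch: valuing a dataset by probability-weighted "value in use"
# rather than by its acquisition cost. All figures are illustrative assumptions.

acquisition_cost = 250_000  # what accounting would book for the data asset

# (use case, expected annual business value if it succeeds, probability of success)
use_cases = [
    ("Reduce customer churn",        1_200_000, 0.60),
    ("Optimize supply chain",          800_000, 0.45),
    ("Cross-sell recommendations",     500_000, 0.70),
    ("Reduce compliance/fraud risk",   300_000, 0.50),
]

# Expected "value in use" is the probability-weighted sum across use cases
value_in_use = sum(value * prob for _, value, prob in use_cases)

print(f"Accounting view (value in acquisition): ${acquisition_cost:,.0f}")
print(f"Data science view (expected value in use): ${value_in_use:,.0f}")
for name, value, prob in use_cases:
    print(f"  {name}: ${value * prob:,.0f} expected")
```

The design point is the last row of Table 1: the accounting number is a single certain cost, while the data science number is an expectation whose confidence depends on how the data will actually be used.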

This “value in use” perspective traces its roots to Adam Smith, the pioneer of modern economics. In his book “Wealth of Nations,” Adam Smith[3] defined capital as “that part of a man’s stock which provides him a revenue stream.” Adam Smith’s concept of “revenue streams” is consistent with the data science approach of leveraging data and analytics to create “value in use.”

We have ready examples of how other organizations determine the economic value of assets based upon “value in use,” starting with my favorite data science book – Moneyball. Moneyball describes a strategy of leveraging data and analytics (sabermetrics) to determine how valuable a player might be in the future. One of the biggest challenges for sports teams is determining a player’s future value, since player salaries and salary-cap management are among the biggest challenges in running a team. Consequently, data science provides the necessary forward-looking, predictive perspective to make those “future value” decisions.

Sports organizations cannot accurately determine a player’s economic value based entirely on past stats. To address this challenge, basketball created Real Plus-Minus (RPM)[4]. Real Plus-Minus is a predictive metric (score) designed to predict how well a player will perform in the future.
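In the same spirit, here is a minimal sketch of a forward-looking valuation (this is not the actual Real Plus-Minus methodology, and the stats and model are invented for illustration): fit a simple model on historical stats, then value a player on the predicted future contribution rather than on the past numbers themselves:

```python
# Hypothetical sketch of forward-looking player valuation: predict next-season
# contribution from past stats, then value the player on the prediction.
# This is NOT the actual Real Plus-Minus methodology; the data is made up.
import numpy as np

# Columns: minutes per game, points per game, assists per game (past season)
past_stats = np.array([
    [34.0, 22.1, 5.3],
    [28.5, 14.7, 7.9],
    [20.1,  8.2, 2.1],
    [31.2, 18.9, 4.4],
])
# Observed next-season contribution metric for those players (training labels)
next_season_value = np.array([6.1, 4.8, 1.2, 5.0])

# Fit a simple least-squares linear model: value ~ past_stats @ w + b
X = np.hstack([past_stats, np.ones((len(past_stats), 1))])  # add intercept column
w, *_ = np.linalg.lstsq(X, next_season_value, rcond=None)

# Value a new prospect on the model's prediction, not on last season's stats
prospect = np.array([26.0, 12.5, 6.0, 1.0])
print(f"Predicted future contribution: {prospect @ w:.2f}")
```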

The Bottom-line:
We need to transition the economic value of data conversation away from the accounting retrospective of what we paid to acquire the data, to a data science predictive perspective of how the data is going to be used to deliver “value in use.”

Economic Value of Data Summary
Data is an asset that can’t be treated like a commodity because:

  1. Every piece of data is different and provides unique value based upon the context (metadata) of that data, and
  2. Traditional retrospective (accounting) methods of determining EvD won’t work because the intrinsic value of the data is not what one paid to acquire the data; the value is in how that data will be used to create monetization opportunities (“value in use”).

To exploit the economic value of data, organizations need to transition the conversation from an accounting perspective (of what has happened) to a data science perspective (on what is likely to happen) on their data assets. Once you reframe the conversation, the EvD calculation becomes more manageable, more understandable and ultimately more actionable.


[2] Figure 1 image: “GeneticCode21.svg,” designed and produced by Kosi Gramatikoff (User:Kosigrim), edited by Seth Miller (User:arapacana), courtesy of Abgent; also available in print (commercial offset one-page, original version of the image) by Abgent. Public Domain.

[3] Adam Smith, “The Wealth of Nations,” 1776.


The post Economic Value of Data (EvD) Challenges appeared first on InFocus Blog | Dell EMC Services.


About William Schmarzo
Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting strategy and defining the Big Data service offerings for Dell EMC’s Big Data Practice.

As a CTO within Dell EMC’s 2,000+ person consulting organization, he works with organizations to identify where and how to start their big data journeys. He’s written white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power an organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow, where he teaches the “Big Data MBA” course. Bill also recently completed a research paper on “Determining The Economic Value of Data”. Onalytica recently ranked Bill as the #4 Big Data influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.
