New Tools for New Times – Primer on Big Data, Hadoop and “In-memory” Data Clouds


Data growth curve: Terabytes -> Petabytes -> Exabytes -> Zettabytes -> Yottabytes -> Brontobytes -> Geopbytes. Each step is a thousandfold jump, and it is only getting more interesting.

Analytical Infrastructure curve: Databases -> Datamarts -> Operational Data Stores (ODS) -> Enterprise Data Warehouses -> Data Appliances -> In-Memory Appliances -> NoSQL Databases -> Hadoop Clusters

———————

In most enterprises, public or private, there is typically a mountain of data, structured and unstructured, that holds potential insights about how to serve customers better, engage with them more effectively, and run processes more efficiently. Consider this:

  • Online firms, including Facebook, Visa, and Zynga, use Big Data technologies like Hadoop to analyze massive amounts of business transactions, machine-generated data, and application data (a minimal sketch of this style of job follows this list).
  • Wall Street investment banks, hedge funds, and algorithmic and low-latency traders leverage data appliances such as EMC Greenplum hardware with Hadoop software to do advanced analytics in a “massively scalable” architecture.
  • Retailers use HP Vertica or Cloudera to analyze massive amounts of data simply, quickly, and reliably, resulting in “just-in-time” business intelligence.
  • New public and private “data cloud” software startups capable of handling petascale problems are emerging to create a new category: Cloudera, Hortonworks, Northscale, Splunk, Palantir, Factual, Datameer, Aster Data, TellApart.
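To make the Hadoop bullet concrete, here is a minimal sketch of that style of batch analysis as a Hadoop Streaming job, where plain scripts act as mapper and reducer. The tab-separated input layout (merchant id, transaction amount) and the script name txn_totals.py are hypothetical, chosen only for illustration.

    #!/usr/bin/env python
    # Minimal Hadoop Streaming sketch: total transaction amounts per
    # merchant. Input layout (merchant_id <TAB> amount) is hypothetical.
    import sys

    def mapper():
        # Emit "merchant<TAB>amount" for every well-formed input line.
        for line in sys.stdin:
            parts = line.strip().split("\t")
            if len(parts) == 2:
                print(f"{parts[0]}\t{parts[1]}")

    def reducer():
        # Hadoop sorts mapper output by key, so a merchant's records
        # arrive together; accumulate and flush on each key change.
        current, total = None, 0.0
        for line in sys.stdin:
            merchant, amount = line.strip().split("\t")
            if merchant != current:
                if current is not None:
                    print(f"{current}\t{total:.2f}")
                current, total = merchant, 0.0
            total += float(amount)
        if current is not None:
            print(f"{current}\t{total:.2f}")

    if __name__ == "__main__":
        # Usage: txn_totals.py map | txn_totals.py reduce
        mapper() if sys.argv[1] == "map" else reducer()

On a cluster this would be launched through the hadoop-streaming jar with the script supplied as both the -mapper and -reducer; locally you can approximate a run with: cat txns.tsv | python txn_totals.py map | sort | python txn_totals.py reduce.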

Data is seen as a resource that can be extracted, refined, and turned into something powerful. It takes real computing power to analyze that data and pull out the insights worth acting on. That’s where new tools like Hadoop, NoSQL databases, in-memory analytics, and other enablers come in.

What business problems are being targeted?

Why are some companies in retail, insurance, financial services, and healthcare racing to position themselves in Big Data and in-memory data clouds while others don’t seem to care?

World-class companies are targeting a new set of business problems that were hard to solve before: modeling true risk, customer churn analysis, flexible supply chains, loyalty pricing, recommendation engines, ad targeting, precision targeting, point-of-sale (PoS) transaction analysis, threat analysis, trade surveillance, search quality fine-tuning, and mashups such as location plus ad targeting.
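As a toy illustration of one problem from that list, the snippet below flags customers for churn risk with a simple inactivity rule; the 90-day cutoff and the sample data are hypothetical stand-ins for what would, at enterprise scale, be a query over billions of rows.

    from datetime import date

    # Hypothetical sample: customer_id -> date of most recent purchase.
    last_purchase = {
        "c-100": date(2012, 1, 5),
        "c-101": date(2012, 4, 28),
        "c-102": date(2011, 11, 2),
    }

    def churn_risks(purchases, today, cutoff_days=90):
        # A customer is "at risk" once inactive past the cutoff.
        return {cid: (today - d).days
                for cid, d in purchases.items()
                if (today - d).days >= cutoff_days}

    print(churn_risks(last_purchase, date(2012, 5, 15)))
    # {'c-100': 131, 'c-102': 195} -> candidates for a retention offer

The platforms above exist precisely because this kind of question has to be answered over every customer, every day, not over three rows.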

To address these petascale problems, the industry is converging on an elastic, adaptive infrastructure for data warehousing and analytics that can do three things:

  • The ability to analyze transactional, structured, and unstructured data on a single platform
  • Low-latency in-memory or solid-state drive (SSD) storage for super-high-volume web and real-time apps
  • Scale-out on low-cost commodity hardware, distributing processing and workloads (a sketch of this follows the list)
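As a sketch of the third point, the snippet below uses consistent hashing, one common technique for distributing workloads across interchangeable commodity nodes; the node names and key format are hypothetical. The property that matters is that adding or removing a node remaps only a small slice of the keys, which is what makes elastic scale-out practical.

    import bisect
    import hashlib

    class HashRing:
        """Consistent-hash ring over a set of commodity nodes."""

        def __init__(self, nodes, replicas=100):
            # Place several virtual points per node on the ring to
            # even out the key distribution across machines.
            self.ring = sorted(
                (self._hash(f"{node}:{i}"), node)
                for node in nodes for i in range(replicas)
            )
            self.keys = [h for h, _ in self.ring]

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def node_for(self, key):
            # A key belongs to the first ring point at or past its hash.
            idx = bisect.bisect(self.keys, self._hash(key)) % len(self.ring)
            return self.ring[idx][1]

    ring = HashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("customer:42"))  # route this record's work here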

As a result, a new BI and analytics framework is emerging to support public and private cloud deployments.
