IBM’s new SaaS service on SoftLayer

IBM’s new SaaS service on SoftLayer offers data management in the cloud

 


 


IBM Corp. on Monday launched a range of new enterprise cloud services based on SoftLayer infrastructure. A year after its US$2 billion acquisition, SoftLayer has become the driving force behind IBM’s rapid acceleration to cloud leadership.

 

With big data creating demand for cloud, SoftLayer will play a key role in delivering IBM’s data and analytics portfolio to clients faster, more effectively and efficiently.

IBM’s cloud revenue went up more than 50 percent in IBM’s first quarter. For cloud delivered as a service, first-quarter annual run rate of $2.3 billion doubled year to year.

IBM’s CEO Ginni Rometty said in April that the company had continued to take actions to transform parts of the business, and to shift aggressively to its strategic growth areas, including cloud, big data analytics, social, mobile and security.

“As we move through 2014, we will begin to see the benefits from these actions,” Rometty said. “Over the long term, they will position us to drive growth and higher value for our clients.”

IBM will make the Watson Engagement Advisor on SoftLayer available via the Bluemix developer platform and the IBM marketplace. It allows organizations to gain timely and actionable insights from big data, transforming the client experience through natural, conversational interactions with systems that get smarter with use.

Running on IBM’s POWER8 processor, IBM Power Systems integrated into SoftLayer’s infrastructure will handle big data, analytics and cognitive requirements in the cloud with unprecedented speed.

Watson Developer Cloud on SoftLayer provides access for third-party developers, entrepreneurs, academics, and systems integrators looking to harness Watson’s cognitive capabilities in the products and services they bring to market.

IBM is now providing over 300 services within the IBM cloud marketplace, which is based on SoftLayer. These include data and analytics offerings on SoftLayer such as the IBM Multi-Enterprise Relationship Management SaaS, which connects and manages shared business processes across a variety of communities; Time Series Database, which connects applications to the Internet of Things; and Analytics Warehouse, which provides an agile platform for data warehousing and analytics.

Aspera high-speed transfer technology is now also available on SoftLayer, allowing users to move large structured and unstructured data sets with maximum speed and security, regardless of data size, distance or network conditions.

IBM also unveiled a new software-defined storage-as-a-service offering on IBM SoftLayer, code-named Elastic Storage on Cloud, to give organizations access to a fully supported, ready-to-run storage environment that includes SoftLayer bare-metal resources and high-performance data management, and that allows organizations to move data between their on-premises infrastructure and the cloud.

Elastic Storage on Cloud is optimized for technical computing and analytics workloads, providing more storage capability in a more cost-effective way. Organizations can now easily meet sudden spikes in storage demands without needing to purchase or manage in-house infrastructure.

With on-demand access to Elastic Storage resources, organizations working on high-performance computing and analytics, such as seismic data processing, credit risk management and financial analysis, weather modeling, genomics and scientific research, are able to quickly adapt to changing business needs and get their products or research out the door faster.

Elastic Storage on Cloud is available starting Tuesday. Pricing starts at $13,735 per 100 TB per month and includes software licenses, SoftLayer infrastructure and full support.

SoftLayer also expanded hourly billing to bare-metal servers, bringing the critical pay-as-you-go benefits of virtual server consumption to dedicated resources. Bare-metal servers provide the increased performance and privacy that many enterprises desire.

IBM Cloud Modular Management is a fully automated service management system that helps companies govern new cloud application environments. It gives companies the choice and flexibility to pick which services they want to manage on their own and which they want IBM to manage for them.

Jumpgate from SoftLayer will also play a key role in helping businesses build their own hybrid cloud environments. Jumpgate allows for interoperability between clouds by providing compatibility between the OpenStack API and a provider’s proprietary API.



Why DevOps Is Critical in New Data Centers

Why DevOps Is Becoming a Critical Factor in New Data Centers

 


 

Practitioner Tip: Work with Other Teams and Find Ways to Build Empathy
Building bridges between teams will raise your understanding of the challenges at every point in the life cycle. As a developer, try to put yourself in the shoes of the operations team: How will they monitor and deploy your software? As an ops person, think about the best ways to help developers get feedback on whether their software will work in production.

 

 

Manager Tip: Build Trust with Your Counterparts on Other Teams
Building trust between teams is the most important thing you can do, and it must be developed over time. Trust is built on kept promises, open communication and behaving predictably even in stressful situations. Your teams will be able to work better together, and the partnership will signal to the organization that cross-functional collaboration is valued.
-
DevOps is a software development practice that uses automation to focus on communication, collaboration and integration between software developers and IT operations professionals. The goal is to maximize the predictability, efficiency, security and maintainability of operational processes. Analyzing this trend, Puppet Labs has released its 2014 State of DevOps report, which includes a 9,200-respondent survey. The survey revealed that high-performing IT departments not only offer a clear competitive advantage, but that respondents in the “high performing” group reported that their organizations are twice as likely to exceed profitability, market share and productivity goals. The report also found that, for the second consecutive year, high-performing IT organizations deploy code 30 times more often with 50 percent fewer failures. With so much riding on the success and failure of IT, many in the profession are searching for ways to improve processes in order to operate at peak levels.

New Data Centers

 

Practitioner Tip: Make Invisible Work Visible
Record what you and your colleagues do to support cross-functional cooperation. If members of the dev and ops teams work together to solve a problem in the development environment, make sure to record and recognize what made that possible: an ops colleague taking an extra on-call shift, or an assistant ordering meals for a working session. These are nontrivial contributions and may be necessary for effective collaboration.

New Data Centers



Hadoop 101

 

Hadoop 101: Programming MapReduce with Native Libraries, Hive, Pig, and Cascading

June 06, 2013 • PRODUCTS • By Stacey Schneider

Apache Hadoop and all its flavors of distributions are among the hottest technologies on the market. It’s fundamentally changing how we store, use and share data. It is pushing us all forward in many ways: how we socialize with friends, how science is zeroing in on new discoveries, and how industry is becoming more efficient.

But it is a major mind shift. I’ve had several conversations in the past two weeks with programmers and DBAs alike explaining these concepts. Those who have not yet experimented with it find the basic concepts of breaking apart databases and not using SQL to be equal parts confusing and interesting. To that end, we’re going to broaden this conversation and start to lay out some of the primary concepts that professionals new to Hadoop can use as a primer.

To do this, examples work best. So we are going to use a basic word count program to illustrate how programming works within the MapReduce framework in Hadoop. We will explore four coding approaches using the native Hadoop library, or alternative libraries such as Pig, Hive, and Cascading, so programmers can evaluate which approach works best for their needs and skills.

Basic Programming in MapReduce

In concept, the function of MapReduce is not some new method of computing. We are still dealing with data input and output. If you know basic batch processing, MapReduce is familiar ground: we collect data, perform some function on it, and put it somewhere. The difference with MapReduce is that the steps are a little different, and we perform the steps on terabytes of data across thousands of computers in parallel.

The typical introductory program or ‘Hello World’ for Hadoop is a word count program. Word count programs or functions do a few things: 1) look at a file with words in it, 2) determine what words are contained in the file, and 3) count how many times each word shows up and potentially rank or sort the results. For example, you could run a word count function on a 200 page book about software programming to see how many times the word “code” showed up and what other words were more or less common. A word count program like this is considered to be a simple program.
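To make the idea concrete before we scale it out, here is a minimal single-machine word count sketch in plain Java; the file path and the tokenization rule are illustrative assumptions rather than anything from the original article.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Map;
    import java.util.TreeMap;

    public class SimpleWordCount {
        public static void main(String[] args) throws IOException {
            // Any plain-text file will do; the default name is just a placeholder.
            String path = args.length > 0 ? args[0] : "book.txt";

            Map<String, Integer> counts = new TreeMap<>();
            for (String line : Files.readAllLines(Paths.get(path))) {
                // Naive tokenization: lowercase the line and split on non-letters.
                for (String word : line.toLowerCase().split("[^a-z]+")) {
                    if (!word.isEmpty()) {
                        counts.merge(word, 1, Integer::sum);
                    }
                }
            }
            counts.forEach((word, n) -> System.out.println(word + "\t" + n));
        }
    }

Run against a single book, this produces the same (word, count) pairs that the MapReduce version later in this post computes across an entire cluster.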

The word counting problem becomes more complex when we want to run a word count function on 100,000 books, 100 million web pages, or many terabytes of data instead of a single file. For this volume of data, we need a framework like MapReduce to help us by applying the principle of divide and conquer: MapReduce basically takes each chapter of each book, gives it to a different machine to count, and then aggregates the results on another set of machines. The MapReduce workflow for such a word count function follows the steps below:

  1. The system takes input from a file system and splits it up across separate Map nodes
  2. The Map function or code is run and generates an output for each Map node—in the word count function, every word is listed and grouped by word per node
  3. This output represents a set of intermediate key-value pairs that are moved to Reduce nodes as input
  4. The Reduce function or code is run and generates an output for each Reduce node—in the word count example, the reduce function sums the number of times a group of words or keys occurs
  5. The system takes the outputs from each node to aggregate a final view

So, where do we start programming?

There are really a few places we might start coding, but it depends on the scope of your system. It may be that you need a program that places data on the file system as input or removes it; however, data can also be moved manually. The main area we will start programming for is the Map and Reduce functions in the workflow described above.

Of course, we must understand more about how storage and network are used as well as how data is split up, moved, and aggregated to ensure the entire unit of work functions and performs as we expect. These topics will be saved for future posts or you can dig into them on your own for now.

Code Examples—Hadoop, Pig, Hive, and Cascading

At a high level, people use the native Hadoop libraries to achieve the greatest performance and have the most fine-grained control. Pig is somewhere between the very SQL-like, database language provided by Hive and the very Java-like programming language provided by Cascading. Below, we walk through these four approaches.

Let’s look at the four options.

Native Hadoop Libraries

The native libraries give developers the most fine-grained control over their code. Given that all other approaches are essentially abstractions, this approach offers the least overhead and the best performance. Most Hadoop queries are not singular; rather, they are several queries strung together. For our simplistic example with a single query, the native library is likely the most efficient. However, once you have a more complex series of jobs with dependencies, some of the abstractions offer more developer assistance.

In the example below, we see snippets from the standard word count example in Hadoop’s documentation. There are two basic things happening. One, the Mapper looks at a data set and reads it line by line. Then, the Mapper’s StringTokenizer function splits each line into words as key value pairs—this is what generates the intermediate output. To clarify, there is a key value pair for each instance of each word in the input file. At the bottom, we can see that the reducer code has received the key value pairs, counts each instance, and writes the information to disk.
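The original post showed these snippets as an image, which is not reproduced here. As a stand-in, the sketch below follows the standard word count from Hadoop’s documentation (the newer org.apache.hadoop.mapreduce API); treat it as a representative example rather than the article’s exact listing.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: reads one line at a time and emits (word, 1) pairs.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reducer: receives (word, [1, 1, ...]) and writes (word, total).
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on each Map node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Because the combiner is the same class as the reducer, partial sums are computed on each Map node before the shuffle, which reduces the intermediate key-value traffic described in the workflow above.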

Apache Pig

Apache Pig’s programming language, referred to as Pig Latin, provides a higher level of abstraction for MapReduce programming that is similar to SQL, but it is procedural rather than declarative. It can be extended with User Defined Functions (UDFs) written in Java, Python, JavaScript, Ruby, or Groovy, and it includes tools for data execution, manipulation, and storage. In the original post’s example, Pig Latin describes the same word count application as above, but fewer lines of code are used to read, tokenize, filter, and count the data.

Apache Hive

Hive is a project that Facebook started in 2008 to make Hadoop behave more like a traditional data warehouse. Hive provides an even more SQL-like interface for MapReduce programming. In the original post’s example, Hive gets data from the Hadoop Distributed File System (HDFS), creates a table for the lines, then does a select count on the table in a very SQL-like fashion; a lateral view applies the split, eliminates spaces, groups, and counts. Each of these commands maps to the MapReduce functions covered above. Often considered the slowest of these approaches, Hive is being actively worked on to speed it up as much as 100x.

Cascading

Cascading is neither a scripting nor a SQL-oriented language—it is a set of .jars that define data processing APIs, integration APIs, as well as a process planner and scheduler. As an abstraction of MapReduce, it may run slower than native Hadoop because of some overhead, but most developers don’t mind because its functions help complete projects faster, with less wasted time. For example, Cascading has a fail-fast planner which prevents it from running a Cascading Flow on the cluster if all the data/field dependencies are not satisfied in the Flow. It defines components and actions, sources, and output. As data goes from source to output, you apply a transformation, and we see an example of this below where lines, words, and counts are created and written to disk.
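The article’s Cascading listing was likewise an image. The sketch below follows the widely used Cascading 2.x word count pattern; the input and output paths, field names and exact package locations are assumptions and may vary between Cascading versions.

    import java.util.Properties;

    import cascading.flow.Flow;
    import cascading.flow.FlowDef;
    import cascading.flow.hadoop.HadoopFlowConnector;
    import cascading.operation.aggregator.Count;
    import cascading.operation.regex.RegexSplitGenerator;
    import cascading.pipe.Each;
    import cascading.pipe.Every;
    import cascading.pipe.GroupBy;
    import cascading.pipe.Pipe;
    import cascading.scheme.hadoop.TextDelimited;
    import cascading.scheme.hadoop.TextLine;
    import cascading.tap.Tap;
    import cascading.tap.hadoop.Hfs;
    import cascading.tuple.Fields;

    public class CascadingWordCount {
      public static void main(String[] args) {
        String inputPath = args[0];   // e.g. an HDFS directory of text files
        String outputPath = args[1];  // where (word, count) tuples are written

        // Source and sink taps: where data is read from and written to.
        Tap docTap = new Hfs(new TextLine(new Fields("line")), inputPath);
        Tap wcTap = new Hfs(new TextDelimited(true, "\t"), outputPath);

        // Pipe assembly: split lines into words, group by word, count each group.
        Pipe wcPipe = new Pipe("wordcount");
        wcPipe = new Each(wcPipe, new Fields("line"),
            new RegexSplitGenerator(new Fields("word"), "\\s+"));
        wcPipe = new GroupBy(wcPipe, new Fields("word"));
        wcPipe = new Every(wcPipe, Fields.ALL, new Count(), Fields.ALL);

        // The planner turns this Flow into one or more MapReduce jobs.
        FlowDef flowDef = FlowDef.flowDef()
            .setName("wc")
            .addSource(wcPipe, docTap)
            .addTailSink(wcPipe, wcTap);

        Flow flow = new HadoopFlowConnector(new Properties()).connect(flowDef);
        flow.complete();
      }
    }

Notice that the pipe assembly reads almost like a query plan: the developer describes sources, transformations and sinks, and the planner decides how many MapReduce jobs the Flow becomes.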

Just the Tip of the Iceberg

For Hadoop, there are many ways to skin this cat. These four examples are considered the more classic or standard platforms for writing MapReduce programs, probably because all except Cascading are Apache projects. However, many more exist. Even Pivotal has one with our Pivotal HD Hadoop distribution, called HAWQ. HAWQ is a true SQL engine that appeals to data scientists because of the level of familiarity and the amount of flexibility it offers. Also, it is fast. HAWQ can leverage local disk, rather than HDFS, for temporarily storing intermediate results, so it is able to perform joins, sorts and OLAP operations on data well beyond the total size of memory in the cluster.

Additional reading:

 

 

 



Hadoop POWERING BIG DATA APPLICATIONS

Hadoop for the Enterprise: POWERING BIG DATA APPLICATIONS

 


Apache Hadoop has become the dominant platform for Big Data analytics in the last few years, thanks to its flexibility, reliability, scalability, and ability to meet the requirements of developers, web startups, and enterprise IT. A fast and economical means of leveraging the huge quantities of data created by new sources such as social networks, mobile sensors, social media, and Internet of Things devices, Hadoop has become the preferred platform for storage and analytics of huge unstructured datasets.

Originally developed in 2003 by data scientists at Yahoo!, Hadoop was quickly embraced by the open source community, as well as by consumer-facing Internet giants such as Google and Facebook. More recently, Hadoop has been adopted by enterprises that similarly need to obtain actionable insight from Big Data created by new data sources, technology innovations, cloud services, and business opportunities. IDC has predicted the Hadoop software market will be worth $813 million by 2016.

Hadoop is a game changer for enterprises, transforming the economics of large-scale data analytics. It eliminates data silos and lessens the need to move data between storage and analytics software, giving businesses a more holistic view of their customers and operations and leading to quicker and more effective business insights. Its extensibility and countless integrations can power a new generation of data-aware business applications.

The software’s “refreshingly unique approach to data management is transforming how firms store, process, analyze and share big data,” according to Forrester analyst Mike Gualtieri. “Forrester believes that Hadoop will become essential infrastructure for large enterprises.”

For companies using proprietary data solutions and staff familiar with SQL analytics tools, transitioning to Hadoop can be difficult, in spite of its many advantages. Integration with existing infrastructure can present a significant challenge. To this end, Pivotal supplies its enterprise-grade Hadoop distribution, Pivotal HD, as either a standalone product or as part of the Pivotal Big Data Suite.

Pivotal HD builds on Hadoop’s solid foundation by adding features that boost enterprise adoption and usage of the platform. It enables the Business Data Lake, allowing companies to bring their existing analytics tools to their data. Pivotal HD is the foundation for the Business Data Lake, supplying the world’s most advanced real-time analytics system with GemFire XD and the most extensive set of advanced analytical toolsets with HAWQ, MADlib, OpenMPI, GraphLab and even Spring XD. Featuring HAWQ, the world’s fastest SQL query engine on Hadoop, Pivotal HD speeds up data analytics, leverages existing skill sets, and significantly broadens Hadoop’s capabilities. Pivotal GemFire brings real-time analytics to Hadoop, allowing companies to process data and make essential business decisions instantly.

While leveraging Hadoop’s proven benefits, Pivotal HD adds features that ease adoption, boost efficiency, and offer robust administration tools. It supports leading data science tools such as MADlib, GraphLab (OpenMPI), and user-defined functions, including support for popular languages such as R, Java, and Python. Pivotal HD also integrates with Spring ecosystem projects such as Spring XD, easing the development of data-driven applications and services.

By allowing companies to collect and take advantage of both structured and unstructured data types, Pivotal HD makes possible a flexible, fault-tolerant, and scalable Business Data Lake. Pivotal’s engineers, several of whom were integral to Hadoop’s development and growth, have built an enterprise-grade Hadoop distribution. Learn more about their continued work on Pivotal HD on the Pivotal blog.

 

Hadoop POWERING BIG DATA APPLICATIONS



Books and other resources to learn R

From Amy’s Page

12 Books and other resources to learn R

This article was originally posted on UCAnalytics. Link to full version is provided at the bottom.

1. R for Reference

R for Everyone: Advanced Analytics and Graphics – Jared P. Lander

YOU CANalytics Book Rating 5 Stars (5 / 5)

Jared Lander, in his book, wastes no time on base graphics (which come pre-installed with R), but jumps directly to the ggplot2 package (a much more advanced and sleek graphics package). This sets the tone for the book, i.e. don’t learn things you won’t use in real-life applications later. I highly recommend this book for a fast-paced way to learn R.


R in Action - Robert Kabacoff

YOU CANalytics Book Rating 5 Stars (5 / 5)

Here is another exceptional book to start learning R on your own. I must say Robert Kabacoff, the author of this book, has done a phenomenal job with it. The organization of the book is immaculate and the presentation is friendly. I highly recommend either this book or R for Everyone to start your journey to learn R.

The R Book – Michael J. Crawley

YOU CANalytics Book Rating 4.8 Stars (4.8 / 5)

With close to a thousand pages and vast coverage, ‘The R Book’ could be called the Bible for R. This book starts with simple concepts in R and gradually moves to highly advanced topics. The breadth of the book can be gauged from the presence of dedicated chapters on topics as diverse as data frames, graphics, Bayesian statistics, and survival analysis. Essentially this is a must-have reference book for any wannabe R programmer, but for a beginner the thickness of the book could be intimidating.

2. R with Theory

An Introduction to Statistical Learning: with Applications in R – Gareth James et al.

YOU CANalytics Book Rating 5 Stars (5 / 5)

This book is a high-quality statistical text with R as the software of choice. If you want to become comfortable with fundamental concepts in parallel with learning R, then this is the book for you. Having said this, you will love this book even if you have studied advanced statistics. The book also covers some advanced machine learning concepts such as support vector machines (SVM) and regularization. A great book by all means.

Machine Learning with R – Brett Lantz

YOU CANalytics Book Rating 4.5 Stars (4.5 / 5)

If you want to learn R from the machine learning perspective, then this is the book for you. Some people take a lot of interest in the fine demarcation between statistics and machine learning; for me, however, there is too much overlap between the topics, and I have given up on the distinction as it makes no difference from the applications perspective. The book introduces the RWeka package – Weka is another open source software package used extensively in academic research.

3. R with Applications

R and Data Mining: Examples and Case Studies – Yanchang Zhao

YOU CANalytics Book Rating 4.3 Stars (4.3 / 5)

There are other books that use a case-study approach to help readers learn R. I like this book because of the interesting topics it covers, including text mining, social network analysis and time series modeling. Having said this, the author could have put some effort into the formatting of this book, which is downright ugly. At times you will feel you are reading a master’s-level project report while skimming through the book. However, once you get past this aspect, the content is really good for learning R.

Data Mining with Rattle and R: The Art of Excavating Data for Knowledge Discovery (Use R!) – Graham Williams

YOU CANalytics Book Rating 4.2 Stars (4.2 / 5)

Rattle is no SAS E-Miner or SPSS Modeler (both commercial GUI-based data mining tools). However, trust me, apart from a few minor issues Rattle is not at all bad. The book is a great reference to Rattle (a GUI add-on package for R for data mining). I really hope they keep working on Rattle to make it better, as it has a lot of potential.

 4. R Graphics and Programming

ggplot2: Elegant Graphics for Data Analysis (Use R!) – Hadley Wickham

YOU CANalytics Book Rating 4 Stars (4 / 5)

ggplot2 is an exceptional package for creating wonderful graphics in R. It is much better than the base graphics that come pre-installed with R, so I recommend you start directly with ggplot2 without wasting your time on base graphics. ‘R for Everyone’, the first book we discussed, has a good introduction to ggplot2. However, if you want to get to further depths of ggplot2, then this is the book for you.

Though I prefer ggplot2, lattice is another package on par with ggplot2. A good book to start with lattice is ‘Lattice: Multivariate Data Visualization with R (Use R!)’ by Deepayan Sarkar.

Read full list.

Additional links



Emerging Storage

Emerging Storage, VMware And Pivotal Drive EMC’s Q2 Earnings

Trefis Team, Contributor

EMC announced its second quarter earnings on July 23, reporting 5% year-on-year growth in net revenues to $5.9 billion. The company’s services revenues rose by almost 9% over the prior-year quarter to $2.6 billion, while its product revenues stayed flat at about $3.3 billion. Much of the growth was driven by VMware (+17%), Pivotal (+28%) and RSA Security (+7%), while core information storage revenues remained nearly flat at $4 billion.

EMC’s market share in external storage systems declined from 30.2% in Q1 2013 to 29.1% in the first quarter of 2014, according to a recent report by IDC. This was the first quarter since 2008 in which EMC’s market share declined year-over-year. EMC’s revenues from external storage systems in Q1 declined by almost 9% while the industry-wide decline was about 5%. However, EMC’s revenues in Q2 grew higher than the industry average, due to which the company gained share in the market.

Weakness in its core business led to market speculation prior to earnings about EMC spinning off VMware and Pivotal. The Wall Street Journal reported that external pressure from EMC’s large institutional investors could lead the company to spin off some of the fastest-growing businesses within the company such as VMware and Pivotal. However, EMC’s management refuted the speculation and stood by its “federation” business model, wherein some of the acquired companies operate as separate entities while they still collaborate on products for large clients. The company believes that its current setup is ideal for growth for both EMC and the acquired companies.

We have a $30 price estimate for EMC, which is roughly in line with the current market price.

See our full analysis for EMC’s stock

Key Areas Of Growth:

Emerging Storage

EMC’s Emerging Storage products such as XtremIO, Isilon, Atmos and VPLEX were largely responsible for the growth in hardware sales during the past few quarters. The Emerging Storage sub-segment grew by 51% year-over-year (y-o-y) in Q1 2014, which the company attributed to a strong customer response for these products. Despite strong y-o-y growth, the revenues generated by emerging storage solutions stayed flat over Q1. The company attributed this to intermittent demand for some large individual orders. The company expects strong growth for emerging storage solutions on the back of solid demand for software-defined storage, Big Data analytics, cloud storage and flash arrays in the coming quarters.

VMware

VMware’s revenues grew by 17% y-o-y to $1.45 billion for the June quarter with growth coming from both product licenses revenues (+16%) and services revenues (+18%). However, VMware’s gross margin within EMC declined by 180 basis points over the prior year quarter to 87.8%. The decline in VMware’s margins led EMC’s overall gross margin to decline by 40 basis points to 62.1%. EMC has invested over $6 billion in acquisitions and internal developments since 2012, of which a significant portion was attributable to VMware related products. These acquisitions included software-defined networking leader Nicira and mobility management leader AirWatch. All the acquisitions will show up as losses on the income statement this year. However, management believes that margins are likely to improve in the future quarters (read: SDN, Hybrid Clouds And AirWatch Help VMware Post Strong Q2 Results).

Pivotal

Pivotal is among the fastest-growing divisions within the company, with 40% y-o-y growth in the first quarter. Although the growth rate for the June quarter was lower at 29%, the number of orders rose by over 50%. Additionally, Pivotal’s margins expanded from the March quarter. Pivotal’s platform consists of new generation data fabrics, application fabrics and a cloud-independent Platform-as-a-Service to support cloud computing and Big Data applications, which have started gaining traction among customers. Management mentioned that some of Pivotal’s growth may not be immediately realized in the numbers since it is building out a subscription-based revenue stream, which is likely to be beneficial in the long run.

RSA Security

RSA Security, EMC’s information security division, grew by over 11% to almost $1 billion in 2013. The growth continued in the first half of 2014, but the rate of growth was lower than 2013 at about 6% y-o-y. The information security industry is growing, with customers allocating more of their security budgets to intelligence-driven analytics, where RSA Information Security excels, rather than static prevention.

 



Microsoft Tries Appliances to Build Clouds

Microsoft Tries Appliances to Build Clouds

 


 

The Surface tablet, Xbox One gaming console and a plethora of peripherals are not the only pieces of hardware in the Microsoft stable. The company is making plans to launch new storage and hybrid Azure cloud devices to expand its capabilities and market share.

According to ZDNet, Microsoft is ramping up a storage appliance, aptly named Azure StorSimple 8000, which connects to the Azure cloud and is based on its 2012 acquisition of StorSimple. The appliance lets users keep their most frequently used data in local storage while assigning and indexing less frequently used files in the cloud.

 


The Azure StorSimple appliance, slated for release in August, will connect to Azure StorSimple Manager, which will provide users with simplified access to and management of locally and remotely stored files.

Microsoft will continue to sell and support the StorSimple 5000 and 7000 series appliances, which also connect to the Azure cloud but do not integrate with Azure StorSimple Manager.

Unlike with other appliances in the software giant’s fold, Microsoft is looking to channel partners – specifically systems integrators – to sell and deploy the StorSimple devices in enterprise and midmarket accounts for disaster recovery, primary and secondary storage, and platforms for application management.

Separate from StorSimple, Microsoft is reportedly gearing up for another run at the Azure in a box strategy. Plans for an Azure private cloud appliance, reportedly being developed under the code name “San Diego,” will provide enterprises with on-premises cloud, network and storage resources. Essentially, Microsoft is attempting to provide enterprises with the same cloud-based Azure functionality in their own data center.

Since 2010, Microsoft has attempted to release Azure appliances. The initial efforts were announced with OEM partners such as Hewlett-Packard, Dell and Fujitsu, but only Fujitsu ended up releasing a commercial product. ZDNet reports the original program petered out in late 2012, even though no official announcement was made.

The new Azure appliance versions will reportedly come from and be supported by Microsoft, and sold through its systems integrator channel.

While pushing deeper into hardware to support its cloud strategy, Microsoft insists plenty of room exists in the market for its appliances and services as well as similar offerings by its traditional OEM partners. Nevertheless, the expanding hardware portfolio does provide further evidence that Microsoft is increasingly a competitor to companies such as Hewlett-Packard, Dell, Lenovo, EMC and IBM.

And, unlike with its Surface tablets, Microsoft seems to have no issue selling and supporting hardware devices through its B2B channels.

Related Articles:

 

 



CLOUD COMPUTING

IBM Provides Cloud Services to California State Agencies

By Barry Levine. Updated July 24, 2014 1:57PM 

There’s a big, new cloud coming to California, powered by IBM. The tech giant said Thursday it will be supplying cloud services for more than 400 state and local agencies. The service, called CalCloud, is the first of its kind in the U.S. at a state level. It will allow data and programs to be stored and made available to all participating agencies, which will only pay for the computing workload they actually use.

The cloud services need to comply with a range of requirements from federal agencies such as the IRS and the Social Security Administration, not to mention HIPAA (the Health Insurance Portability and Accountability Act) and the security standards of the National Institute of Standards and Technology.

‘Important Step’

Through CalCloud, agencies can now share a common pool of computing resources that the California Department of Technology said would be more efficient than the current setup. Nearly two dozen departments have requested IT services via CalCloud.

Marybel Batjer, secretary of the Government Operations Agency, said in a statement that CalCloud “is an important step towards providing faster and more cost-effective IT services to California state departments and ultimately to the citizens of California.”

IBM will be supplying and managing the infrastructure of CalCloud, and the state’s Department of Technology will take care of the other aspects. Big Blue also said it will work with the state to transfer knowledge and best practices relating to security and systems integration with the department.

As with other cloud services, this pay-for-use arrangement will enable the state agencies to scale up or down the resources they need for variable workloads. It also provides immediate and round-the-clock access to such configurable resources as compute, storage, network and disaster recovery services.

High Performance, Watson

IBM has been rapidly building up its cloud services, and creating more than a hundred software-as-a-service solutions for specific industry needs. The CalCloud project will likely become the basis for similar offerings to other states, as well as to other governments worldwide.

In other IBM news, the company said Wednesday that it will be making high performance computing more accessible through the cloud to clients that need additional capabilities for big data and other computationally intensive workloads.

Very high data throughput speeds will be enabled from IBM’s SoftLayer company, using InfiniBand networking technology to connect SoftLayer bare metal servers. InfiniBand is a networking architecture that delivers up to 56 Gbps.

SoftLayer CEO Lance Crosby said in a statement that “our InfiniBand support is helping to push the technological envelope while redefining how cloud computing can be used to solve complex business issues.”

Also on Wednesday, IBM and financial services firm USAA announced that IBM Watson intelligence-as-a-service technology will now be employed for USAA members. It is the first commercial use of Watson in a consumer-facing role. Watson will be used in a pilot project to help military men and women transition from military to civilian life.

 

CLOUD COMPUTING

 



Box Raises More Money

Box Raises More Money, Cloud Questions

 

 


Cloud storage and content management company Box is fast becoming a focal point of the cloud computing era. While other cloud ventures such as Salesforce.com and NetSuite have become productive service providers, Box plods along with high cash-burn rates and an indeterminate exit strategy.

Yesterday, Box announced it raised another $150 million in fresh venture funding, adding to its $80 million in cash reserves and bringing its total investment backing to $450 million. The company is now worth, by some estimates, $2.4 billion, even though its revenues are somewhere around $200 million.

What makes Box an interesting study is its expenses. Until recently, the company spent much more on marketing and communications than anything else in its operations. According to Forbes, Box spent $171 million on sales and marketing in 2013 – nearly a third more than its total revenue. The company says its business model, which relies on adding accounts and subscribers, requires heavy investments in sales, marketing and infrastructure.

Box’s high expenses have long been a sore spot. The company is showing signs of reining in expenses and expanding sales faster than spending. In the first quarter of 2014, marketing spending was still up 40 percent over the same quarter in 2013, but sales doubled.

The challenge Box faces is the same as for many cloud service providers. Cloud revenues compound over time, and deferred revenue counts more than point-in-time sales. Box is counting nearly $90 million in deferred revenue from the first quarter – double the same period in 2013 – and it has added more than 5,000 paid corporate accounts. All cloud service providers see weakness in revenues while building their base. If they manage the transition period, they will hit an inflection point where compounding recurring revenue will exceed and accelerate past expenses.

Another company experiencing this phenomenon is Adobe. In 2013, Adobe abandoned its traditional software licensing model to embrace cloud subscriptions. Initially, Adobe revenues and profits plummeted to the point where alarms were going off on Wall Street and among partners and users. The precipitous dip made many question whether Adobe could weather the financial transition.

Today, Adobe is profitable and growing. Its compound recurring revenue – based on nearly 2.2 million paid users – is generating positive cash flow. And the company expects to exceed 3.3 million paid subscribers before the end of 2014.

Box is a bit different than many cloud providers, as it supports millions more unpaid users than paid subscribers. This puts a burden on the company to build infrastructure and support around that broader base, which adds expenses. However, Box may prove the broader base is worth the expense, as those free users contribute to the conversion of net-new paid accounts.

The ultimate lesson Box may prove is that marketing makes a difference in building cloud brands. If Box turns the corner, goes public and becomes another cloud powerhouse, it will change the rules on what it takes to build a successful cloud-era business: loud and persistent marketing and communications.

Related Articles:

 

 



Big Data: The 5 Vs Everyone Should Know

Big Data: The 5 Vs Everyone Should Know and Why They All Matter

 

Big Data is a big deal. It will transform our world completely and is not a passing fad that will disappear. To understand the phenomenon that is big data, it is usually described using 5 Vs: Volume, Velocity, Variety, Veracity and Value.

I thought it might be worth simply restating what these 5 Vs are, in plain and simple language:

Volume refers to the vast amounts of data generated every second. Just think of all the emails, Twitter messages, photos, video clips, sensor data, etc. that we create and share every second. We are not talking terabytes but zettabytes or brontobytes. On Facebook alone we send 10 billion messages per day, click the “like” button 4.5 billion times and upload 350 million new photos every day. If we take all the data generated in the world between the beginning of time and 2008, the same amount of data will soon be generated every minute! This increasingly makes data sets too big to store and analyze using traditional database technology. With big data technology we can now store and use these data sets with the help of distributed systems, where parts of the data are stored in different locations and brought together by software.

Velocity refers to the speed at which new data is generated and the speed at which data moves around. Just think of social media messages going viral in seconds, the speed at which credit card transactions are checked for fraudulent activity, or the nanoseconds it takes trading systems to analyze social media networks to pick up signals that trigger decisions to buy or sell shares. Big data technology now allows us to analyze the data while it is being generated, without ever putting it into databases.


Variety refers to the different types of data we can now use. In the past we focused on structured data that fits neatly into tables or relational databases, such as financial data (e.g. sales by product or region). In fact, 80% of the world’s data is now unstructured, and therefore can’t easily be put into tables (think of photos, video sequences or social media updates). With big data technology we can now harness different types of data (structured and unstructured), including messages, social media conversations, photos, sensor data, and video or voice recordings, and bring them together with more traditional, structured data.

Veracity refers to the messiness or trustworthiness of the data. With many forms of big data, quality and accuracy are less controllable (just think of Twitter posts with hashtags, abbreviations, typos and colloquial speech, as well as the reliability and accuracy of the content), but big data and analytics technology now allows us to work with these types of data. The volumes often make up for the lack of quality or accuracy.

Value: Then there is another V to consider when looking at Big Data: Value! It is all well and good having access to big data, but unless we can turn it into value it is useless. So you can safely say that value is the most important V of Big Data. It is important that businesses make a business case for any attempt to collect and leverage big data. It is easy to fall into the buzz trap and launch big data initiatives without a clear understanding of costs and benefits.

I have put together this little presentation for you to use when talking about or explaining the 5 Vs of big data:

 

Big Data: The 5 Vs Everyone Should Know and Why They All Matter