Cloud Hosting

Microsoft brings Office to Android phones, but overlooks tablets

Posted on May 12, 2014 at 10:39 am

Microsoft has brought its subscription-based Office 365 service to Android phones, expanding the firm’s on-the-move document-editing experience to a wider range of users and devices. The app, however, is not available for Android tablets.

The service, which is already available on Windows Phone 8 and iOS, brings Word, Excel and PowerPoint to the hands of mobile users. The app is currently only available to users in the US, but Microsoft said that it will be rolling it out to other countries “over the next several weeks”.

The Office 365 app is free to download from Google’s Play store, but users will be asked to log in to a Microsoft account with a subscription to Office 365. Office 365 costs £7.99 per month or £79.99 per year and includes access to the full Office suite as well as storage on Microsoft’s SkyDrive service.

The lack of a tablet option has left larger screen users disappointed. One reviewer wrote: “Lack of tablet support definitely limits the use case for me. May have limited use on the phone at best. Tablet support is needed for this to truly be usable.”

Microsoft recommends that tablet users instead use the Office 365 web application for document editing. This matches the firm’s iOS offering, which, while usable on the iPad, is not recommended for tablet use. There is no word from Microsoft about dedicated tablet applications.

Upon logging in, users will be greeted with a recent documents tab, which keeps regularly accessed documents synced to mobile devices. In addition, documents stored in SkyDrive will open at the position where the user left them, allowing for seamless editing across devices and locations.

The Office iOS app launched in June to much fanfare, but did not prove to be a hit with users. The app currently averages just two-and-a-half stars in the iTunes store.

Posted in Cloud Hosting

NASA cloud computing use blasted for security and management failings

Posted on May 10, 2014 at 1:38 pm

Nasa’s cloud computing strategy came under fire from US authorities, with concerns raised about major security failings and a lack of communication and organisation.

The report from the US Office of Inspector General (OIG) stated that Nasa’s cloud services “failed to meet key IT security requirements”. It went on to say that of five Nasa contracts for acquiring cloud services, “none came close to meeting recommended best practices for ensuring data security.”

Nasa currently spends $1.5bn annually on IT services, only $10m of which goes on cloud-based services. However, the agency itself predicts that 75 percent of its future IT programmes will be in the cloud, making the OIG’s findings even more of a cause for concern.

The report went on to list numerous ways in which the agency failed to meet federal IT security requirements. “We found that the cloud service used to deliver internet content for more than 100 NASA internal and public-facing websites had been operating for more than two years without written authorisation or system security or contingency plans,” it said.

The audit also found that required annual tests of security controls had not been performed, which it said “could result in a serious disruption to Nasa operations”.

Nasa chief information officer Larry Sweet joined the agency in June and seemingly has a mountain to climb to reorder his department’s operations, with many decisions apparently made with his predecessor completely in the dark. “Several Nasa Centers moved Agency systems and data into public clouds without the knowledge or consent of the Agency’s Office of the Chief Information Officer,” the report said.

The report noted that Sweet agreed with the findings and, funds permitting, will work “to improve Nasa’s IT governance and risk-management practices”.

Nasa has long been a supporter of cloud computing projects, lending its backing to the OpenStack open-source cloud project in 2010.

Posted in Cloud Hosting

Sophos to bring threat management to Amazon Web Services

Posted on May 8, 2014 at 5:32 pm

Security firm Sophos has launched a new service that will allow users to run the company’s Unified Threat Management (UTM) platform on Amazon Web Services’ (AWS) Elastic Compute Cloud (EC2) service.

The company said that it would be adding an hourly licence option for its threat management service, which will be available for users to purchase on the AWS Marketplace.

Security services have been a feature of the AWS Marketplace since Amazon launched it in 2012. The marketplace allows third-party vendors to integrate their products with AWS virtual machine instances.

Sophos believes that the new pricing model will allow users to retain security on their servers when running AWS instances for short-term projects, or when relying on the cloud platform’s elasticity to scale with customer demand during peak operating times.

“As a long-standing security provider, we know about the many benefits that Amazon Web Services provides, especially to SMBs that have adopted the cloud,” said Sophos senior product manager Angelo Comazzetto.

“We pride ourselves on developing complete security offerings that are simple to use, and with this offering, companies can better defend their cloud security resources with layers of security provided by Sophos UTM.”

The company said that the hourly fees will depend on the pricing and region of the AWS instance. Listed prices range from $0.02 per hour for a Standard Micro instance to $3.10 per hour for a High I/O 4XL EC2 instance.
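By way of illustration, once a customer has subscribed to a Marketplace listing, launching the appliance works like launching any other Amazon Machine Image, with the hourly software fee billed through AWS on top of the normal instance charge. A minimal sketch using the boto3 library, with a placeholder AMI ID standing in for the actual Sophos UTM listing:

    # Minimal sketch: launching an EC2 instance from a Marketplace AMI.
    # The AMI ID and instance type are placeholders, not the real Sophos
    # listing; the hourly software fee is metered automatically by AWS.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-00000000",   # placeholder Marketplace AMI ID
        InstanceType="t1.micro",  # small instance for a short-term project
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])

Terminating the instance when a short-term project ends also ends the hourly licence charge, which is the point of the pricing model.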

Posted in Cloud Hosting

Oracle ends partner legal spat as it extends Database 12c management tools

Posted on May 6, 2014 at 2:18 pm

Oracle has ended a legal spat with former partner CedarCrestone, in a case that went to court earlier this year.

The case started in September 2012, and saw Oracle allege last year that the firm had stolen intellectual property relating to updates for tax and regulatory software owned by Oracle.

CedarCrestone strongly denied these accusations and said Oracle had engaged in an “unlawful and systematic attack” against third-party support firms.

However, the dispute has now been settled. In a terse statement on Oracle’s website the firm states: “Oracle and CedarCrestone, Inc. announce that they have amicably resolved the litigation between them. The terms of the settlement are confidential.”

The case has echoes of a similar legal spat between Oracle and SAP regarding the theft of code by the German firm’s TomorrowNow subsidiary.

As well as ending the legal spat, Oracle has been busy updating its products, upgrading its Database 12c platform with wider management support through its Oracle Enterprise Manager platform.

The Database 12c service was announced at the start of July and offers a multi-tenant architecture within the cloud, which the firm said will be key for transitioning the platform to hosted services.

By adding the Enterprise Manager platform the firm said it could further support customers using the service by providing greater IT management.

This includes the ability to “consolidate, clone and manage many databases as one,” and improve IT productivity by reducing the time it takes to perform administrative tasks as well as providing the ability to identify and resolve issues with diagnostics and analysis capabilities.
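The “many as one” idea rests on 12c’s multi-tenant architecture, in which pluggable databases (PDBs) live inside a shared container database and can be cloned with a single statement. A hedged sketch of what that looks like from Python using the cx_Oracle driver; the connection string, database names and file paths are placeholders:

    # Hedged sketch: cloning a pluggable database inside an Oracle 12c
    # container database. Assumes a DBA account; all names and paths
    # are placeholders.
    import cx_Oracle

    conn = cx_Oracle.connect("sys/password@cdb1", mode=cx_Oracle.SYSDBA)
    cur = conn.cursor()

    # Clone an existing PDB; 12c copies its data files to the new location.
    cur.execute("""
        CREATE PLUGGABLE DATABASE pdb_clone FROM pdb_source
        FILE_NAME_CONVERT = ('/oradata/pdb_source/', '/oradata/pdb_clone/')
    """)
    cur.execute("ALTER PLUGGABLE DATABASE pdb_clone OPEN")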

Sushil Kumar, vice president of Product Strategy and Business Development at Oracle, said that adding the management tools would help customers better manage the platform as its use grows.

“As enterprises continue to implement private clouds, IT management is becoming increasingly complex and costly. Oracle Enterprise Manager 12c is being used by organisations around the world for its broad set of cloud-enabling and management capabilities,” he said.

“By extending Oracle Enterprise Manager 12c to enable managing ‘many as one’, with the new database release, Oracle is making it even easier for customers to significantly reduce IT management costs, avoid critical issues and outages and free up resources for other business-critical tasks.”

Posted in Cloud Hosting

IBM teams up with Pivotal for Cloud Foundry push

Posted on May 4, 2014 at 5:32 pm

IBM and Pivotal have signed on to advance the Cloud Foundry platform.

The companies said that they would join the effort to build an open-source cloud platform which can be adopted by customers for public and private cloud computing platforms.

IBM said that the effort would allow customers to produce cloud computing deployments without the risk of vendor lock-in, keeping options open.

“Cloud Foundry’s potential to transform business is vast, and steps like the one taken today help open the ecosystem up for greater client innovation,” said IBM next generation platforms general manager Daniel Sabbah.

“IBM will incorporate Cloud Foundry into its open cloud architecture, and put its full support behind Cloud Foundry as an open and collaborative platform for cloud application development, as it has done historically for key technologies such as Linux and OpenStack.”

Launched in 2011 by VMware, Cloud Foundry seeks to provide businesses with a common platform for both public and private cloud networks.
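In practice, an application targeting Cloud Foundry is simply a web process that binds to a port the platform hands it, with a buildpack supplying the runtime and a single “cf push” deploying the code. A minimal sketch of such an app in Python, assuming the platform passes the port in the PORT environment variable (VCAP_APP_PORT on older releases):

    # Minimal sketch of a Cloud Foundry-style web app: bind to the port
    # the platform provides and serve requests. Standard library only.
    import os
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello from Cloud Foundry\n"]

    port = int(os.environ.get("PORT", os.environ.get("VCAP_APP_PORT", 8080)))
    make_server("0.0.0.0", port, app).serve_forever()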

For its part, IBM said that it would be providing its WebSphere platform to Cloud Foundry, including a preview version of the Application Server Liberty Core.

“We believe that the Cloud Foundry platform has the potential to become an extraordinary asset that many players can leverage in an open way to enable a new generation of applications for the cloud,” said Pivotal chief executive Paul Maritz.

“IBM’s considerable investment in Cloud Foundry is already producing great results with application-centric cloud offerings such as making IBM WebSphere Liberty available on Cloud Foundry. We look forward to growing and expanding an open Cloud Foundry community together with IBM.”

Posted in Cloud Hosting

Intel unveils ambitious project to redefine the data centre

Posted on May 2, 2014 at 4:52 pm

SAN FRANCISCO: Intel has outlined its vision to reshape the data centre with new approaches for compute, storage and network technologies to make data centres more flexible and cost-effective, measures that will be needed to meet looming challenges in data volumes and power consumption.

At its data centre event in San Francisco, Intel outlined its strategy, which amounts to creating a kind of reference architecture for data centre operators to follow. It comprises technologies for virtualising the network, making storage smarter, and re-architecting servers at the rack level to deliver a pool of resources that can better meet the requirements of applications.

These changes are needed in order to meet the changing requirements of data centres, driven by factors such as the boom in mobile devices and the success of services such as social media, according to Intel’s senior vice president of the data centre and connected systems group, Diane Bryant.

“If you look at where we are now, today’s infrastructure is strained. It can take weeks to reconfigure the network to support new processes. At the same time, we’ve moved from the traditional structured enterprise data to a world of unstructured data,” she said.

Intel’s solution is to create a blueprint for the software-defined data centre, using automation to enable it to adapt to changing requirements.

Perhaps the most radical part of the vision is Intel’s Rack Scale Architecture (RSA) strategy, which “breaks down the artificial boundary of the server” in order to turn racks into pools of compute, storage and memory that can be used to provide an application with the optimum resources it requires, Bryant said.

Jason Waxman, general manager of Intel’s Cloud Infrastructure group, showed off two server “tray” designs that are a step on the road to delivering this vision, he claimed. One was filled with multiple Atom nodes, with a network mezzanine card at the rear that provides a switched fabric right in the tray, with silicon photonics interconnects to link each tray to a top-of-rack switch.

“Ideally, you want the rack to be completely modular, so you can upgrade each of the subsystems as you require, without having to rip out the whole server,” he said.

The other parts of the data centre blueprint involve virtualising the network, using software-defined networking (SDN) and network function virtualisation (NFV), the latter of which sees network functions such as a firewall or VPN delivered using virtual appliances running on standard servers.

On the storage side, Intel sees a growing role for SSD storage, perhaps integrated into the rack, while less frequently used data is relegated to low-cost disk storage in Atom-based storage nodes.

Intel stressed that its approach was standards-based, saying that the orchestration and management tools to deliver its software-defined data centre vision would come from third parties, such as the OpenStack cloud framework.

However, Intel pushed home the advantages of its x86 architecture chips, pointing to the vast ecosystem of operating systems, applications and services that has built up around them.

“Software consistency is important,” said Waxman. “With other architectures, it’s not just about porting apps, it’s about the supporting database and the middleware,” he added.

Posted in Cloud Hosting

Google extends cloud printing service to Windows

Posted on April 30, 2014 at 1:54 pm

Google has rolled out a number of updates to its Cloud Print product, most notably bringing the service to Windows machines.

Previously only available on Google’s laptop operating system, Chrome OS, Cloud Print allows users to print from any device to any cloud printer to which they have access. The service’s Windows offering is twofold, with tools for consumers and IT managers. The consumer software installs a virtual printer on a user’s machine, which then allows them to print to any Google Cloud Print printer.

The second is Google Cloud Print Service, which enables system administrators to connect existing printers to Cloud Print. Running in the background on machines using Windows XP, Vista or 7, the software is still in beta as Google continues to develop the service.

Google has also brought the offering to Android, with an app in the Google Play Store giving mobile devices access to Cloud printers.

In addition, users can now share printers via a unique URL, and administrators using this function can set how many pages a particular user is allowed to print.

Cloud Print was first introduced in 2011 with a simple browser plug-in for Chrome, but it now works with cloud-ready printers, which connect directly to the internet without needing a PC to be connected and switched on. 
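For developers, Cloud Print also exposes a web API for submitting jobs. A hedged sketch of a submission using the requests library; the endpoint and field names follow Google’s documentation of the time, and the printer ID and OAuth 2.0 token are placeholders:

    # Hedged sketch: submitting a job to a shared Cloud Print printer.
    # PRINTER_ID and TOKEN are placeholders; the token needs the
    # cloudprint OAuth scope.
    import requests

    resp = requests.post(
        "https://www.google.com/cloudprint/submit",
        headers={"Authorization": "Bearer TOKEN"},
        data={
            "printerid": "PRINTER_ID",
            "title": "Test page",
            "contentType": "url",
            "content": "https://example.com/doc.pdf",  # document to print
        },
    )
    print(resp.json())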

Elsewhere, Google has seen its staple advertising business take a slight hit, as the amount it receives from advertisers per click has dropped by six percent. Google’s share price fell as a result, despite the search giant making a profit of $3.23bn.

Posted in Cloud Hosting

IBM System z mainframe update: zEnterprise BC12, OpenStack integration

Posted on April 28, 2014 at 8:46 am

IBM has unveiled a wide-ranging update to its System z mainframe line, including a new zEnterprise business class BC12 system designed to appeal to firms of all sizes, a beefed-up Enterprise Linux Server, and integration with OpenStack clouds.

Announced today, the updates to System z represent a move by IBM to keep its mainframe platform relevant in the modern world, with performance enhancements and new capabilities targeting specific customer segments such as government, banking and healthcare.

In addition to the zEnterprise BC12 (zBC12), which is intended to make capabilities found in the high-end zEnterprise EC12 available to a broader market, IBM is releasing new versions of the z/OS operating system and the z/VM virtualisation platform across System z, along with numerous other enhancements.

In fact, IBM tries to discourage use of the word mainframe, System z director Kelly Ryan told V3, because of the image it conjures up of the massive room-filling systems of the past. In contrast, the zEnterprise BC12 unveiled today fits in a single cabinet, but is powerful enough to replace a whole rack full of standard servers, with much greater availability and reliability.

The zBC12 is effectively the follow-on from the zEnterprise 114, which launched in 2011, Ryan said. It enables customers to start small and scale up as required, with an upgrade path all the way up to zEC12. Pricing for the zBC12 starts at about £40,000.

This means that many customers will use the zBC12 as their entry point to using System z, while others are likely to use it as a regional server or as a failover system for the most critical workloads running on a larger zEnterprise box.

“It’s a bigger business class with up to 13 total cores compared with 10 before, so you can run a lot more Linux workloads or z/OS workloads, but it’s a single frame that’s completely air cooled,” she said.

Compared with the z114, the new box boasts a 36 percent performance improvement per core thanks to faster 4.2GHz processor chips, with memory capacity doubled to a maximum 512GB.

With the new z/OS 2.1 release, IBM has added scale and data-serving enhancements, plus an improved z/OS Management Facility designed to make it easier for new clients to adopt the platform.

Meanwhile, the z/VM 6.3 release adds a significant new capability in the form of integration with the OpenStack cloud computing framework. This will allow customers with OpenStack-based infrastructure to provision resources on a System z through the cloud orchestration and management layer for the first time, according to IBM.

IBM is publishing the APIs to enable this, Ryan said, allowing a zBC12 or zEC12 to integrate into an overall cloud architecture.

“Whatever cloud computing layer the client is running, whatever tools are pushing down on OpenStack, they can now push down on to z/VM and do the provisioning through it. You can envision a picture where you have your System z pieces, your PowerVM pieces, some VMware pieces, anything that ties up to OpenStack, available in a consistent manner,” she explained.
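Because z/VM plugs into the standard OpenStack compute API, provisioning a guest on System z should look no different from provisioning on any other hypervisor. A hedged sketch using the python-novaclient library; the credentials, endpoint, image and flavour names are all placeholders:

    # Hedged sketch: provisioning through the standard OpenStack compute
    # API, the layer the z/VM integration plugs into. All names are
    # placeholders for a real deployment's credentials and catalogue.
    from novaclient import client

    nova = client.Client("2", "user", "password", "project",
                         "http://controller:5000/v2.0")

    image = nova.images.find(name="zlinux-image")  # placeholder image
    flavor = nova.flavors.find(name="m1.small")    # placeholder flavour
    server = nova.servers.create("zvm-guest-01", image, flavor)
    print(server.id)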

The z/VM 6.3 update also adds enhancements for performance and scale, increasing real memory support up to 1TB, for example.

IBM’s updated Enterprise Linux Server (ELS) enables customers to run up to 40 virtual servers per core, equating to a maximum 520 in a single footprint on the zBC12 or thousands on the zEC12. This enables customers to consolidate hundreds of workloads onto a single system.

A dedicated analytics solution based on ELS is also new, providing a dedicated Linux environment for analytics that is competitively priced, according to IBM.

IBM is also previewing new versions of its Information Management System (IMS) and DB2 applications, with IMS 13 set to ship to customers participating in the Quality Partnership Programme (QPP), while DB2 has already shipped to a select group of clients in a closed Early Support Programme (ESP).

New zEnterprise solutions for specific industries comprise the IBM Signature Solution, aimed at fraud detection for government agencies; IBM Smarter Analytics for Banking; and an updated IBM Cúram Solution for care management in healthcare.

While many have predicted the decline of the mainframe, System z is actually one of IBM’s growth areas, showing a revenue increase of 10 percent in IBM’s most recent financial figures.

“We’ve really seen a turning where clients are now coming in and asking us what new workload can be run on System z. Clients are starting to understand and see the benefits of the economies of scale in running an integrated platform,” said Ryan.

Posted in Cloud Hosting

SAP co-chief executive to step aside, leaving McDermott as sole company head

Posted on April 26, 2014 at 11:39 am

SAP co-chief executive Jim Hagemann Snabe has announced his intention to step down from the role, leaving partner Bill McDermott to lead the company forward.

Since 2010 the firm has been led jointly by Hagemann Snabe and McDermott, but Hagemann Snabe said he wanted to spend more time with his family, and so will take a less hands-on role on the company’s supervisory board.

“After more than 20 years with SAP, I have decided that it is time for me to begin the next phase of my career, closer to my family,” said Hagemann Snabe.

McDermott said he was pleased to retain the advice of his partner, but ready to take on the challenge of leading the company forward on his own.

“As co-chief executives, we have a proven track record of making bold decisions that set SAP and our customers up for value and growth,” said McDermott.

“The proposed setup, with Jim joining the Supervisory Board, builds on the strength of our partnership and personal friendship, and will make SAP an even stronger company as we accelerate the transformation of the industry.”

SAP’s setup of having two chief executives is a rare example in the business world, made even rarer by the fact that it proved such a successful partnership, in contrast to the failings at BlackBerry during its co-chief executive era.

Their work at the company has seen a huge push towards cloud computing and business intelligence tools, such as the Hana platform, although rivals have claimed the firm was slow to embrace the cloud and is losing customers as a result.

SAP’s customers said they would be sad to see Hagemann Snabe go, but were glad he would remain involved in the company, as Philip Adams, chairman of the UK and Ireland SAP User Group, explained.

“Although it’s a shame to see Jim will be leaving the role of co-CEO, we are pleased he’ll be moving onto the Supervisory Board as he’s been a great customer advocate and has had huge passion for making sure SAP delivers the best possible products to us,” said Adams.

“McDermott and Hagemann Snabe have done a good job and initiated an era of better communication with customers.”

Posted in Cloud Hosting

OpenStack pegs itself as the operating system for the data centre as it celebrates third birthday

Posted on April 24, 2014 at 10:04 am

The OpenStack cloud computing framework is marking its third anniversary with plans to become the data centre operating system of the future, as it celebrates the huge progress made so far.

It is just three years since hosting firm Rackspace and Nasa launched the joint open-source cloud project which went on to become OpenStack. Since then, the OpenStack Design Summit & Conference has grown from just 75 attendees at the first event, to over 3,000 earlier this year.

In the same period, the OpenStack code has seen at least seven releases, and picked up major IT industry players such as Cisco, Red Hat, IBM and HP as backers. Developers from over 200 different companies across more than 120 countries contributed code to the last release, making OpenStack a truly global project.

Rackspace vice president of technology Nigel Beighton told V3 that OpenStack’s success can be attributed to the founders’ decision to completely open up the platform to anyone wishing to participate, as well as a keen focus on meeting the requirements of end users.

“When we first started OpenStack, there were 75 of us in the room at the first summit, and it was a bit of a geek fest. What’s been a subtle but important change is that the last summit was primarily a user conference, and what we do a lot more now is explain just how to use the technology,” he said.

However, another factor in its favour is that there are not many other options available for organisations or service providers wanting to implement infrastructure as a service (IaaS), Beighton admitted.

One of OpenStack’s big early challenges was credibility, as cloud is a complex issue and a major infrastructure investment for organisations to make. Getting the backing of firms like IBM was key, according to Beighton.

“One of the best things the OpenStack community did was to really get on board many of the big long-term traditional IT providers like IBM, HP, Cisco and Juniper. We managed to do this by setting up OpenStack properly: we handed over all the intellectual property relating to it, we set up a proper board and made it a full open-source project and retained nothing,” he said.

This is in contrast to some other open-source projects where the intellectual property is retained by a single company, which then uses it to develop a private, paid-for version, Beighton added.

Indeed, fears that Rackspace would have too much influence over OpenStack led to the creation of the OpenStack Foundation, which took over the management of the platform last year.

Going forward, the challenges for the OpenStack community are to build and maintain interoperability across every cloud that uses the platform, as the code continues to develop.

“The user community cares a great deal about open standards. People don’t want to be locked in. If they can’t move their applications between Rackspace, HP, IBM or Piston, then it suddenly weakens the whole proposition for all of us,” Beighton said.

Looking ahead, Rackspace sees hybrid cloud as the future, and is positioning OpenStack as the “operating system of the data centre”, with a key focus on the links between private and public cloud infrastructure.

Rackspace and OpenStack are looking to address these challenges with three key developments, according to Beighton. The first is software-defined networking, which is being implemented in an update of the platform’s networking component, Neutron, due with the next OpenStack release, codenamed Havana, later this year (see the sketch below).

The second development is Rackspace’s partnership with Cern, which is expected to deliver standards for federating and connecting clouds together, while the third is a bare-metal provisioning technology called Ironic, which is set to enable OpenStack to manage workloads that may not be suitable for virtualised infrastructure.
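To give a flavour of the software-defined networking piece mentioned above, the hedged sketch below shows the kind of call Neutron exposes: a network and subnet defined entirely in software, with no switch reconfiguration. It assumes the python-neutronclient library, and the credentials and address range are placeholders:

    # Hedged sketch: defining a network and subnet in software via the
    # Neutron API. Credentials and the CIDR are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username="user", password="password",
                            tenant_name="project",
                            auth_url="http://controller:5000/v2.0")

    net = neutron.create_network(
        {"network": {"name": "app-net", "admin_state_up": True}})
    neutron.create_subnet({"subnet": {
        "network_id": net["network"]["id"],
        "ip_version": 4,
        "cidr": "10.0.0.0/24",  # placeholder address range
    }})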

“The data centre operating system of the future needs to integrate any of the resources in that data centre, because there are lot of technologies that don’t like virtualisation, and it’s about controlling all of them, not just the cloud part,” Beighton said.

Posted in Cloud Hosting
