
After Hacking Attack, Apple’s Dev Center Site Is Up and Running Again


Eight days after taking it down in response to a security breach, Apple has restored the website for its Developer Center.

Apple didn’t immediately respond to requests for comment. But the entry page of the site was clearly visible this afternoon. Some sections, like forums, were still offline. Certificates, identifiers and profiles were back online.

An email circulated to Apple developers said, “Thank you for bearing with us while we bring these important systems back online. We will continue to update you with our progress.” Apple has also added a system status page so members can keep track of what’s back up and running and what’s not.

Access to the site had been curtailed for several days as Apple investigated the circumstances of a security incident said to have occurred on July 18.

The company said in an email to its developer community (see below) three days after the incident took place that the site had been accessed by what it called “an intruder.”

Apple said in the original email disclosing the breach that it would be “completely overhauling our developer systems, updating our server software, and rebuilding our entire database.” It hasn’t gone into any further detail about the nature of the attack.

The Apple developer site grants access to iOS 7, OS X Mavericks and other software development tools. When it first went down it was marked with a notice saying it was down for maintenance. A later notice apologized that maintenance was taking longer than expected. Developers were told that memberships that would have expired during the downtime had been automatically extended.

Since extended downtime of this sort is rare for Apple, people in the dev community naturally began to wonder what was up. Apple finally came clean about the attack and said that “…we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed.” There is still no word on whether any of that data actually was.

Here’s the full text of the email sent around to developers.

Developer Certificates, Identifiers & Profiles Now Available

We appreciate your patience as we work to bring our developer services back online. Certificates, Identifiers & Profiles, software downloads, and other developer services are now available. If you would like to know the availability of a particular system, visit our status page.

If your program membership expired or is set to expire during this downtime, it will be extended and your app will remain on the App Store. If you have any other concerns about your account, please contact us.

Thank you for bearing with us while we bring these important systems back online. We will continue to update you with our progress.

IBM and Nvidia Team Up on Supercomputing and Servers


Here’s a second bit of interesting news on the supercomputing front. Computing giant IBM and chipmaker Nvidia are today announcing a significant partnership that will have them teaming up on the design and building of new supercomputing systems and servers.

If you look at the machines on the Top 500 list of the world’s most powerful supercomputers, which was released today, you’ll see IBM and Nvidia popping up quite a bit. Only Hewlett-Packard built more systems on the list than IBM, and not that many more. And Nvidia chips are used to accelerate the computing in 38 of the machines on the list, while at the same time helping to keep power consumption down.

Here’s what’s going on. Nvidia makes a type of chip called a graphics processing unit. Fundamentally, a GPU is really good at a certain kind of computation known as a floating point operation. That kind of work is what’s needed to render the graphics of computer games – the business Nvidia was first built on. And the same floating point math underpins the computation needed to visualize and simulate complex problems for engineers and scientists, and to create the visual effects in movies.

Generally speaking, GPUs are better at this kind of computation than your traditional CPU chip, like an Intel Xeon or an AMD Opteron. The difference is that a GPU chip is designed to handle lots of small computational tasks that are carried out all at once, while keeping a lid on power consumption. In computer science, this is called parallel computing, and CPU chips aren’t as good at it as GPU chips. CPUs are better at doing one job at a time, getting it done really fast, and then moving on to the next thing.
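To make the contrast concrete, here is a minimal sketch in Python of the data-parallel shape of work a GPU excels at: the same floating point operation applied independently to every element, so the work can be fanned out across many processors at once. (A CPU process pool stands in for GPU cores here, and the function names are illustrative.)

```python
import math
from multiprocessing import Pool

def scale_and_root(chunk):
    # The same floating point operation applied independently to every
    # element -- no element depends on any other, which is exactly the
    # shape of work a GPU's many small cores are built for.
    return [math.sqrt(x) * 2.5 for x in chunk]

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the data into 8 chunks and process them simultaneously,
    # a scaled-down analogue of GPU-style parallelism.
    chunks = [data[i::8] for i in range(8)]
    with Pool(processes=8) as pool:
        results = pool.map(scale_and_root, chunks)
    print(sum(len(r) for r in results), "values processed in parallel")
```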

So, the important thing to understand is that GPU chips – most of them come from Nvidia, though some come from AMD and Intel, too – have been showing up in the world’s most powerful supercomputers with increasing frequency. They’re the backbone of Titan, which was once the world supercomputing champ and is still a very respectable No. 2 on the new Top 500 list.

Now, IBM – which would naturally like to take some supercomputing business away from HP – is going to work with Nvidia on supercomputers, and also on its Power line of enterprise servers. The two companies said they will share technology that will make it easier for Nvidia’s Tesla GPUs to talk to IBM’s Power8 processors.

Obviously, there’s a lot of status in supercomputing. But working with Nvidia will also give IBM something new to stir up its Power server business. While Big Blue takes in more revenue from server sales than anyone else – about $3.3 billion in the second quarter, according to IDC – the fact is that the overall Unix server business is slowing down considerably. IDC reckons that Unix server revenue will decline by more than $1 billion by 2017.

These expensive and specialized Unix machines are increasingly being supplanted by less-expensive Linux machines using industry-standard Intel and AMD processors. Adding GPUs as an option won’t reverse that decline, but it won’t hurt IBM’s efforts to manage it. For Nvidia, it’s a big endorsement of its GPU technology in the enterprise.

Wrike Raises $10M In Funding For Service That Helps The Work Get Done


Since its start seven years ago, Wrike has been entirely bootstrapped. In that time it has built a project management SaaS business, winning clients such as headset maker Beats and Ecco, a well-known sandal company. Today, the company has taken a new direction in its effort to scale: Wrike has announced it has raised $10 million in Series A funding from Bain Capital Ventures.

Wrike CEO and Founder Andrew Filev said in an email interview that the company did not need outside investment in its initial development. Without funding, the company learned how to prioritize and stay lean. Over the course of its seven years, the company grew to about 4,000 customers. To scale, the company will need to become accessible to millions of people. And that means creating a platform that developers can plug into for building apps that people can use from any number of different services:

We already have good APIs and a host of integrations, some built by us, and some built by our partners and customers. Now we want to take it to the next level, so that anywhere you do your work online, the tool has a connector to Wrike.
Another area is mobile. We already have a leading mobile app, compared to our key competitors, but there’s so much more we want to do there. Then there’s the core product, there’s enterprise, there are interesting customer requests… and we’re in the very early phase for our market. Wrike should be used by 5 million businesses in 5 years, and there’s a lot of work to get there.

Wrike competes in a space with well-funded competitors like Asana but also established players like Atlassian, which is building out its service for company-wide use. To its advantage, Wrike was early to the market, helping it establish itself as a leading provider in the space.

But building a platform is no easy task; it takes years to develop. And everyone has their own projects these days, their own DIY gigs. We’ll just have to see whether they also need Wrike’s kind of tools to get the work done.

Goldman Sachs Invests $40 Million in SugarCRM


There’s apparently still some excitement to be found among investors in the business of customer relationship management software. Today, SugarCRM said it has raised $40 million in private equity funding from investment bank Goldman Sachs.

The new funding brings its total capital raised to about $83 million, including about $15 million in debt financing raised earlier this year. Its last venture round, a $20 million Series D, included Draper Fisher Jurvetson, New Enterprise Associates and Walden International.

Antoine Munfa, a Goldman Sachs VP, will join SugarCRM’s board of directors.

I talked with CEO Larry Augustin earlier this week, and he told me that recurring revenue – a key metric for software-as-a-service companies – grew about 30 percent in the second quarter, and has been growing for 15 straight quarters. The company added about 600 new customers, bringing its total to 6,500.

SugarCRM’s approach is to offer CRM software that runs in the cloud or on-premise, or in a mixed manner as needed.

You’d think that between Salesforce.com, whose primary application is a cloud-based CRM application, software giant Oracle, and SAP, the CRM market would be pretty much sewn up. But all those companies do other things, Augustin said.

“We’re focused on CRM, and that’s all we do,” he told me. “Salesforce does a lot of other things. They are expanding to be a broad-based provider of software-as-a-service. We think there is a lot of room for innovation around CRM. We’re not adding a marketing cloud or a service cloud, or things like Force.com. We think there is room for innovation around what CRM can do and how we can help individual sellers when they are talking to their customers.”

The funding round, Augustin says, is intended to help SugarCRM get toward an initial public offering, though he wouldn’t say anything about timing for a filing of an S-1 with the U.S. Securities and Exchange Commission. “That’s the question everyone wants to ask, and we just can’t comment on it,” he said.

U.S. Weather Computers Are a Little More Super Today Than Yesterday


If it seems, over the course of the next few weeks, that weather predictions are a little more accurate, then it’s probably not your imagination. It’s just that the computers that the U.S. Federal Government uses to predict the weather have gotten a lot smarter.

The National Oceanic and Atmospheric Administration, the parent agency of the National Weather Service, switched over to using two new IBM-made supercomputers, according to an interesting story from IDG’s Computerworld.

The new machines are capable of 213 teraflops, or 213 trillion floating point operations per second. That’s almost three times the power of the prior systems, which were capable of 74 teraflops. One will be in Reston, Va., and one will be in Orlando, Fla. The systems were “turned on” during a press event in College Park, Md.

While that’s definitely some serious computing horsepower, it’s well shy of the world’s current supercomputing champ, China’s Tianhe-2, which boasts a scorching 33.86 petaflops, or 33.86 quadrillion floating point operations per second.
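The arithmetic behind those comparisons is easy to check; here is a quick back-of-the-envelope sketch in Python using only the figures quoted above:

```python
# Back-of-the-envelope check on the figures quoted above.
new_tf = 213        # new NOAA systems, in teraflops
old_tf = 74         # prior systems, in teraflops
tianhe2_pf = 33.86  # Tianhe-2, in petaflops

print(new_tf / old_tf)             # ~2.88, i.e. "almost three times" the prior power
print(tianhe2_pf * 1000 / new_tf)  # ~159, how far Tianhe-2 outruns the new machines
```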

How will you notice the difference? Since these computers will be the source of pretty much every weather forecast you’re likely to see, including those found on all the weather apps on your smartphone, you may start noticing temperature predictions that are more accurate, especially in the extended forecasts.

Another – and probably more important – change should come in the accuracy of hurricane modeling. Last year, during Hurricane Sandy (pictured), there were criticisms that European forecasters used more accurate computer models to predict where the storm was heading.

As it turns out, there is a bit of a friendly competition going on between NOAA and the European Centre for Medium-Range Weather Forecasts, which had those better computer models. It’s not sitting still, either. It just purchased a set of new Cray supercomputers, but hasn’t yet disclosed their performance.

AWS Adds SDK Support For Windows Phone And Windows Store Apps


Amazon Web Services continues to enhance support for Microsoft workloads with added SDK support for Windows Phone and Windows Store Apps.

According to the AWS blog, the new support comes with a Developer Preview of the next version of the AWS SDK for .NET. The release of the SDK adds two new enhancements for .NET developers.

A developer can connect Windows Phone or Windows Store apps to AWS services and build a cross-targeted application that’s backed by AWS. With the addition, AWS now also offers SDK support for Windows as well as iOS and Android.

AWS also added support for the “task-based asynchronous pattern,” which uses “the async and await keywords” and makes programming asynchronous operations against AWS much easier.
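The SDK itself is .NET, but the shape of the task-based asynchronous pattern is easy to illustrate with Python’s asyncio as an analogue: a call against a remote service returns an awaitable task right away, and await suspends the caller only until the result comes back. (The service call below is a stand-in, not an actual AWS SDK method.)

```python
import asyncio

async def put_object(bucket: str, key: str, body: bytes) -> str:
    # Stand-in for a network call to a storage service; in the real
    # SDK this would be an asynchronous request against AWS.
    await asyncio.sleep(0.1)  # simulate network latency
    return f"stored {len(body)} bytes at {bucket}/{key}"

async def main():
    # The task-based pattern: start several operations at once, then
    # await them all, instead of blocking on each call in turn.
    tasks = [put_object("my-bucket", f"item-{i}", b"payload") for i in range(3)]
    for result in await asyncio.gather(*tasks):
        print(result)

asyncio.run(main())
```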

The support follows AWS efforts to show support for running Microsoft Exchange Server in the AWS cloud, as well as SQL Server and SharePoint.

The new support illustrates the competition among the cloud service providers to become the developer center for all devices. AWS is by far the leader, but Windows Azure has steadily added more features for supporting iOS and Android.

Apigee Launches Purchase-To-Payment API Platform


Apigee has a new platform for customers to manage API-driven business efforts that extend from purchase to payment of digital assets. The service is meant for organizations, such as telecommunications providers, that sell services delivered through an API.

Apigee has designed the platform so a customer can get help with pricing, setting up notifications, and limits that flag when a set number of products has been sold. It comes with an administration platform and a developer platform for billing. Licensed on a yearly basis, the platform is available both in the cloud and on-premise.

The communication through the API monetization platform is two-way. For example, telecommunications customers have often had to send email notifications themselves when there was a change to a rate plan for one of their digital services. With the new platform, that process is automated, so a customer can set up notifications for the developers subscribing to the plan.

The issue extends to the finance department: API providers have historically collected money from developers by invoice. With the platform’s billing integration, charges are settled automatically, with real-time credits and deductions to the developer’s account, and no invoice has to be sent.
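Apigee hasn’t published the interface being described here, but the automation looks roughly like this hypothetical Python sketch: a rate-plan change notifies every subscribed developer and posts the corresponding real-time credit or debit to their account. Every class and method name below is invented for illustration; none of this is Apigee’s actual API.

```python
# Hypothetical sketch -- class and method names are invented for
# illustration and are not Apigee's actual API.
class MonetizationPlatform:
    def __init__(self):
        self.subscribers = {}  # rate plan name -> list of developer emails
        self.balances = {}     # developer email -> account balance

    def change_rate_plan(self, plan: str, old_price: float, new_price: float):
        for dev in self.subscribers.get(plan, []):
            # Instead of the provider hand-writing emails, the platform
            # notifies every developer subscribed to the plan...
            self.notify(dev, f"Rate plan '{plan}' changed: {old_price} -> {new_price}")
            # ...and applies the difference as a real-time credit or
            # debit, so no invoice needs to be sent.
            self.balances[dev] = self.balances.get(dev, 0.0) + (old_price - new_price)

    def notify(self, developer: str, message: str):
        print(f"to {developer}: {message}")

platform = MonetizationPlatform()
platform.subscribers["gold"] = ["dev@example.com"]
platform.change_rate_plan("gold", old_price=0.50, new_price=0.25)
print(platform.balances)  # shows a 0.25 credit for the price cut
```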

In the overall market, there are companies that are digital natives and those that do not have a background with APIs. Apigee is trying to serve both: easier API integration for the more seasoned customers, and the expertise to show clients newer to the ways of the API economy how the service can be offered and managed.

APIs are becoming part of the mainstream business world. Until recently, APIs were primarily viewed as a way to connect apps. But they are increasingly used as a gateway for customers to sell services. This is evident in how they are getting baked deeper into enterprise systems. Intel acquired Mashery for $180 million this spring to offer the API platform as a way to connect back-end systems to the cloud.

In essence Apigee is offering its customers a deeper way to automate the selling process and subsequent management of a customer’s digital assets. That’s something we can expect to see more often as APIs move deeper into the mainstream business world.

Disclosure: Apigee’s Sam Ramji needed a place to stay while here in Portland this week for OSCON so he bunked at our house.

BlackBerry Shares Crash on Word of Buyout Bid Failure


Shares of the troubled Canadian wireless company BlackBerry fell by more than 16 percent in pre-market trading Monday, following the collapse of an expected buyout bid from Fairfax Financial. CEO Thorsten Heins was replaced, and former Sybase CEO John Chen was named interim CEO and executive chairman.

As of 9:20 am ET, BlackBerry shares were trading at $6.48, down by $1.29. At that price, BlackBerry’s market capitalization will be about $3.4 billion when the markets open for formal trading later this morning.

That would be only about $1 billion more than the combined cash and short-term investments it said it had on hand when it reported its latest quarterly results in September. If it were to fall much further, it would be trading at levels near or possibly below the value of its cash holdings, which would imply that the marketplace considers the company essentially worthless.
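The math implied by those figures is quick to check; here is a sanity-check sketch in Python using only the numbers quoted above:

```python
# Sanity check on the figures quoted above.
price_now, drop = 6.48, 1.29
prior_close = price_now + drop       # 7.77
print(drop / prior_close)            # ~0.166, the "more than 16 percent" fall

market_cap = 3.4e9                   # ~$3.4 billion at $6.48 a share
print(market_cap / price_now / 1e6)  # ~525 million shares implied outstanding
```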

The one bit of good news, if you can call it that, is that Fairfax said it would lead an effort to inject $1 billion in cash into BlackBerry’s coffers. Fairfax itself will put in about $250 million, calling it a “vote of confidence.”

The Matrix Of Hell And Two Open-Source Projects For The Emerging Agnostic Cloud


Docker, an app container service from the co-founder of DotCloud, and Salt, an open DevOps platform from the founder of SaltStack, were mentioned this past week at OSCON as two of the most exciting new open-source efforts.

Complexity comes with the cloud and its fit with enterprise data centers. The Docker team calls this new world of services and devices the matrix of hell. The Salt folks see salvation in speed – perhaps to save us all from the hell of heavyweight systems that demand extensive resources and run slowly because they were built when distributed systems were not as common as they are today.

Both projects are tied to the deeper complexity that comes now with what new DotCloud CEO Ben Golub and Co-Founder Solomon Hykes describe as a world that resembles a matrix, with rows representing an endless number of available services and columns representing any number of devices where applications run. DotCloud supports the Docker open-source project.

Their emergence also represents the new reality of what can be described as the “agnostic cloud.” Sure, there’s a belief structure about the cloud, but there is no almighty allegiance to its power. Instead, there is an agnostic movement to make on-premise and cloud services accessible through a universe of providers and open-source services that run anywhere – be it a private data center or a public cloud service.

Docker

Docker automates the deployment of apps as lightweight Linux containers. A container can be built and tested on a laptop and synced to run anywhere. It can run on virtual machines, bare-metal servers, OpenStack clusters, public instances or any combination of on-premise and cloud offerings.

Docker does not port the virtual machine or the operating system, which makes sense when considering that the infrastructure itself is becoming the operating system. The compute, storage and networking are already in place on a cloud service – the application just goes there to run.

The service avoids the issue that comes with moving virtual machines, which are not designed to move between clouds. So instead of moving the VM, Docker moves the code between the VMs. Most of the security is managed by the Linux kernel.
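In practice that means an image is built once from the app and its dependencies, then run unchanged on any Docker host. Here is a minimal sketch using the Docker SDK for Python (a convenience wrapper around the Docker API that arrived after this was written; it assumes a local Docker daemon is running):

```python
import docker  # the Docker SDK for Python (pip install docker)

client = docker.from_env()  # connect to the local Docker daemon

# Run a lightweight container from a stock image -- no VM or guest OS
# to port, just the app and its dependencies inside the container.
output = client.containers.run("ubuntu", "echo hello from a container")
print(output.decode())
```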

Hykes said in an interview last week that developers particularly like the capabilities to continually test and integrate app containers. This makes for simpler and faster methods for building applications that can run anywhere. For example, developers are using Docker to build next-generation platform as a service (PaaS) offerings. It’s a noteworthy development. Most PaaS providers have historically provided monolithic platforms to do as much as possible. With Docker, platforms can be built that leverage the services of different providers to create lightweight environments for building and delivering apps.

For more technical descriptions about Docker, there are some good resources here, here and here.

Salt

Salt is a new open DevOps platform built for speed. It is designed to push commands and data out to nodes over a generic high-speed communication layer and to process the results in parallel, with feedback coming back very quickly. Harvard University used it for its supercomputer clusters; jobs that once took 15 minutes now take five seconds.

According to the SaltStack website, Salt can be scaled to tens of thousands of servers through a communications bus that handles orchestration, remote execution, configuration management and other tasks.

Salt is being used as a replacement for Chef and Puppet, the two leading DevOps platforms. It is now used by LinkedIn and Rackspace. Here’s an excerpt from a good analysis by Sebastian Kreutzberger, CEO of RhodeCode, an open-source code management platform for Git and Mercurial:

Salt is like a mix of Chef/Puppet (defining states) and an easy way to communicate with machines directly (like with an MQ). The big difference to Chef is the architecture: the slave (called minion) does not pull for changes every bunch of minutes, which can cause weirdness, but has a standing connection to the master which allows instant changes and commands.
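In code, that remote-execution model is compact. Here is a minimal sketch using Salt’s Python client API, run on the master and assuming minions are already connected (test.ping and cmd.run are standard Salt execution modules):

```python
import salt.client  # Salt's Python API; this runs on the salt master

local = salt.client.LocalClient()

# Push a command to every connected minion over the message bus and
# collect the responses in parallel -- the standing-connection model
# described in the excerpt above.
print(local.cmd("*", "test.ping"))               # liveness check on all minions
print(local.cmd("web*", "cmd.run", ["uptime"]))  # shell command on a subset
```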

Salt’s documentation is often singled out for praise, and it has helped the community further develop the platform. Here’s an introduction to Salt by its creator Thomas Hatch:



Conclusion

The cloud and on-premise systems are starting to merge into one cohesive universe. OpenStack serves as a way to make data-center environments more elastic. Cloud services like Amazon Web Services represent the public cloud infrastructure. The PaaS providers are becoming environments for serving apps to these different infrastructures. These agnostic providers, such as Cloud Foundry, do not serve one cloud. They help developers serve multiple cloud environments.

The same is true for services like CloudMunch, which offers a continuous integration platform that can move code between different cloud services. CloudMunch Founder Pradeep Prabhu said this new universal world has three main characteristics:

  • There must be the choice to use any developer or operations tools with any PaaS, for any IaaS/cloud or on-premise/private cloud.
  • It has to be workload-centric: whatever makes the best sense for a given workload – including tooling, patterns and practices, and infrastructure/cloud – for delivering the best results/ROI for that workload.
  • It must allow a customizable software delivery progression to be defined, with all the checks and balances for both application code and infrastructure code, and with no lock-in to any tool, methodology or cloud.

Similar principles apply to Docker, which treats the app container as the way to deliver apps to the cloud or any other infrastructure. Salt also fits into this universal mentality.

The new world is not about universal control and beliefs in all-mighty systems. Open-source efforts like Docker and Salt are popular because they fit into this more flexible and agnostic view of the cloud and data center universe.

