Thursday, July 23, 2009

All we wanted was to run well on Hyper-V...

This week, Microsoft announced that it would be releasing its optimized virtualization drivers for Hyper-V under the GPLv2 license and had submitted them for inclusion in the mainline Linux kernel tree. That was big news, and all this week the open source community and the tech industry as a whole have been trying to fully process it.

Unexpectedly, Vyatta found itself in the middle of this story. For the past few days, we have remained largely silent, but in the last 24 hours or so, news stories have started to circulate that have bordered on putting words into the mouths of both Vyatta and its employees. In this post, I'll try to set the record straight.

On Monday, after seeing Greg Kroah-Hartman's posting to the Linux kernel mailing list announcing that Microsoft had opened its Hyper-V drivers, Stephen Hemminger, a principal engineer here at Vyatta, created a blog post that provided a little background on what happened. It turns out that Stephen was the one who got this whole thing moving.

In his blog, Stephen wrote:

This saga started when one of the users on the Vyatta forum inquired about supporting the Hyper-V network driver in the Vyatta kernel. A little googling found the necessary drivers, but on closer examination there was a problem. The driver had open-source components, which were under GPL, but was statically linked to several binary parts. The GPL does not permit mixing of closed and open source parts, so this was an obvious violation of the license. Rather than creating noise, my goal was to resolve the problem, so I turned to Greg Kroah-Hartman. Since Novell has a (too) close association with Microsoft, my expectation was that Greg could prod the right people to get the issue resolved.

It took longer than expected, but finally Microsoft decided to do the right thing and release the drivers.

Stephen's posting is purely factual and also suggests a lot about his personality. Stephen was simply trying to solve a technical problem (help Vyatta's open source networking system run more optimally in Microsoft Hyper-V environments; we already support optimized drivers for Citrix XenServer and VMware vSphere) and ended up running into a licensing problem (Microsoft's drivers included closed-source, binary-only components that were statically linked with GPL components). While many in the open source community would love to play "Licensing GOTCHA!" with Microsoft on the GPL, immediately posting such a discovery on Slashdot to whip everybody into a frenzy, Stephen isn't that kind of guy. Instead, Stephen decided to work the issue itself, not score PR points. He called Greg Kroah-Hartman at Novell and asked him to approach Microsoft about the issue. Fast forward, and Microsoft makes the (right, IMO) decision to open source the Hyper-V drivers.

In a quest for more drama than occurred in reality, some press accounts have suggested that Stephen or Vyatta "accused" Microsoft of a "GPL violation." As you can tell by reading Stephen's blog posting, nobody "accused" anybody of anything. Stephen merely called the situation to Microsoft's attention, again, working the issue, not the PR. There were no threats, no screaming, no broken fingers, no frothing at the mouth. Just a few calm phone calls placed behind closed doors, out of the limelight and media focus. And that was that. Microsoft noodled on things. And then it decided to open source the drivers and contribute them to the kernel.

So, beers all around, I say. Microsoft just released its first GPLv2 code and made a large contribution to the Linux kernel. Let's raise a toast and welcome them. Using that contribution, Linux can now achieve high performance in a Hyper-V environment. Building on that, Vyatta will shortly deliver optimal performance in a Hyper-V environment as well, further securing our place as the leading virtual networking and security solution, supporting XenServer, vSphere, and Hyper-V.

PS: Gordon Haff has a good write-up and analysis over at CNET.

Wednesday, June 10, 2009

Vyatta + Citrix

Yesterday, Vyatta announced a partnership and a new round of investment with Citrix Systems. I wanted to spend a bit of time here explaining why this is important to the market moving forward.

First, a new round of investment is always a good thing when you're a new company. Beyond the obvious goodness of "cash in the bank," this investment shows that Vyatta is really on to something big, and other people believe that, too. If you're working at a startup right now, you can probably attest that funding rounds are few and far between. In the same way that we saw a huge number of startup deaths in 2001 and 2002 as investors sat on their hands and money dried up, the same thing has happened with the recent economic troubles, only far worse. Many VC industry pundits are predicting a 50% decline in the number of VC firms by the time this recession is over. With that as a backdrop, only the best companies are able to take in new money. I'm happy to say that Citrix has validated Vyatta's importance moving forward.

Now, the question is, why would a company like Citrix make such an investment? The short answer is that there is a lot of synergy (no pun intended for those who attended Citrix's Synergy conference last month in Las Vegas) between Vyatta and Citrix. Citrix is focused on building the best application delivery infrastructure possible. Citrix has application products, services like GoToMeeting, a number of appliances such as NetScaler, and virtualization infrastructure in the form of XenServer. Vyatta complements those products in several areas.

First, Vyatta works well with Citrix's existing appliance products. Put Vyatta and NetScaler together, for instance, and you have a very effective data center networking and application delivery optimization solution.

Second, Vyatta works well with XenServer. Vyatta has always virtualized well; today, we run on XenServer, VMware, open source Xen, Hyper-V, and even some things like Sun/Oracle VirtualBox (though we only officially support XenServer, VMware, and open source Xen). I think it's now clear that virtualization is a key technology that will impact almost all areas of IT infrastructure, from the data center to the branch office. Further, virtualization is a key technology in the implementation of cloud computing. Citrix has been very strong in putting forth a cloud strategy with its Citrix Cloud Center (C3) architecture and product portfolio. Last month, Citrix announced NetScaler VPX, a virtual appliance version of NetScaler. In the same way that Vyatta complements NetScaler and other Citrix technologies in the physical world, Vyatta is a perfect complement to C3 products and technologies in the virtual world. To show how Vyatta might be used in some cloud scenarios, Citrix has included Vyatta in a couple of its C3 Blueprints.

So, there you have it.

Linux 2.6.30 Kernel Statistics

Once again, Stephen Hemminger kicks ass and Vyatta makes the list of key contributors to the 2.6.30 kernel. Thanks, Stephen.

Friday, May 22, 2009

Rumble in the desert

Tom and I just got back from Interop last night. There was a great panel session that I participated in, titled "Is Routing Undergoing a Mid-Life Crisis?" The other panelists were Juniper and Cisco.

Sean Kerner happened to be there in the front row and blogged about the session:

The short of it is that I basically stated unequivocally (using props as you'll see in the picture in the blog post) that the emperors have no clothes. While Vyatta doesn't do everything and there are still big portions of the market where Cisco and Juniper have good solutions (high-end carrier routing, for example), if Vyatta can do the job for you (in low to upper-mid-range routing and security, for instance), we're a flat-out better solution. (And over time we'll also get to high-end carrier routing. ;-) )

Anyway, I'm going to try to dig up the audio transcript, but many people came up after the session was over to get a free Vyatta CD and told me that they thought this was the best session of the conference.

Monday, May 04, 2009

Gross Profits = Sweet Savings

This week, Cisco will be announcing its earnings again. If this is a typical quarter, Cisco will come in with approximately 63% gross profit margin. As we have discussed previously, that's a VERY high gross margin. If you're a Cisco investor, you love 63% gross margins. If you're a Cisco customer, however, you really don't love it; this means that 63 cents of every dollar you spend with Cisco is gross profit.

Never being ones to sit around idle while customers are overpaying, Vyatta decided to do something about it. Today, we announced the "Gross Profit -- Sweet Savings" promotion. All week, we'll be accepting registrations for the promotion. On Wednesday, after Cisco announces its earnings, we'll compute Cisco's gross profit margin from 1Q09 and send out a special discount code via email, good through Friday evening (Pacific time). The discount will depend on Cisco's gross margin. For instance, if Cisco announces a 63% gross margin, customers will pay 63% of list price for a Vyatta subscription (a 37% discount). All the terms and limitations are spelled out on the registration page.
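The arithmetic of the promotion is simple enough to sketch in a few lines. This is just an illustration: the function name and the $1,000 list price are hypothetical, and 63% is the example margin from this post, not the actual announced figure.

```python
def promo_price(list_price, gross_margin_pct):
    """Under the promotion, the customer pays Cisco's announced
    gross-margin percentage of Vyatta's list price."""
    return list_price * gross_margin_pct / 100.0

# Hypothetical example: a $1,000 subscription when a 63% margin is announced
price = promo_price(1000, 63)   # the customer pays $630
discount_pct = 100 - 63         # i.e., a 37% discount off list
```

In other words, the higher Cisco's announced margin, the less of a discount you get, which is exactly the point of the promotion.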

So, register today and look for an email on Wednesday, after Cisco announces its gross profit margin. It's a great way to score a Vyatta subscription for less. If Cisco's investors are cheering, why can't you?

Tuesday, April 21, 2009

Mickos agrees

After posting my thoughts on the Oracle/Sun deal yesterday, it was interesting to see former MySQL CEO Marten Mickos's take on the deal: Why Oracle Won't Kill MySQL. Mickos's thoughts parallel my own: Oracle will use MySQL to fend off Microsoft in the low end and provide a way to capture developers and bring them into the Oracle fold.

Monday, April 20, 2009

Oracle Buys Sun

Well, well. After all the speculation that IBM was buying Sun, it looks like Oracle finally moved the transaction across the finish line.

Previously, it looked like the IBM/Sun deal fell apart over price and possibly strategic fit. Originally, in early April, the Wall St. Journal said the deal was going to be at $9.55 per share. Later, IBM offered $9.40. The Sun board balked at this and IBM subsequently withdrew the offer.

With the latest news, it looks like Oracle agreed to a price of $9.50, cash, bringing the final price to about $7.4B.

Previously, I had said that I didn't get the IBM/Sun deal. With Oracle, the fit seems a lot better, but there are still some questions. The thoughts going through my head are:

  1. From a technology standpoint, the Oracle/Sun deal feels a lot more complementary than the IBM/Sun deal did. Oracle and Sun have always been strong partners, with Oracle's DB running well on Sun's hardware and software (SPARC/Solaris). Further, Oracle has built a lot of its applications on Java technology.
  2. With the purchase of Sun, Oracle now competes more directly with companies like IBM and HP. In the same way that you could buy DB2, running on AIX or Linux, running on IBM X- or P-Series hardware, you'll now be able to buy Oracle, running on Solaris or Linux, on Sun SPARC or x86. At the performance-oriented, high-end of the database market, this will be a big deal and will probably deliver great long-term performance gains. Oracle now controls and can tune all levels of the stack.
  3. The only significant overlap between the companies is with the database (Oracle vs. MySQL) and the operating system (Oracle's Linux variant vs. Solaris).
  4. While some people fear for MySQL under Oracle's leadership, I'm not sure that they should automatically worry. The reality is, Oracle was going to lose a lot of database business to MySQL either way, and if not MySQL, it would have been PostgreSQL or even low-end commercial options like Microsoft SQL Server. IMO, it's far better to have MySQL under Oracle's control so that it can be better positioned as an on-ramp to the larger, more lucrative Oracle DB for high-end applications as well as a spoiler for Microsoft. Imagine, for instance, if Oracle embraced MySQL and created a set of tools that allowed developers to work seamlessly with either DB from an application point of view. MySQL could be used during development or at smaller, departmental scales, with minimal conversion to full Oracle licenses as the apps went into production and scaled up. In short, killing MySQL won't save Oracle from the other open source and low-end threats, so if it's smart, it will use MySQL to get developers on the Oracle bandwagon and then keep them there with seamless tools that work on the big DB.
  5. Bigger questions remain for software packages like Open Office. I'm sure that Oracle (and Larry Ellison) would take no greater pleasure than in making the office-suite market a no-profit zone for Microsoft. Currently, MS Office is one of Microsoft's biggest cash cows, and so cutting into that revenue stream would hurt Microsoft dearly. That said, how much is Oracle willing to invest to do this? Clearly, Open Office is an orthogonal play to the mainline database business, with little tie-in visible at first look. Still, there might be some sort of connection that could be created if Oracle looks hard enough. This would take time, however, and it isn't clear whether Oracle would be willing to carry the development costs. That may mean that Open Office would be sold or spun out into a separate entity that would have to find its own revenue stream, much as Netscape did with Mozilla. Killing it outright seems like a waste given that it still provides competition for Microsoft; the main issue is simply the funding model to keep that going.
  6. Next, there is the question of Solaris vs. Linux. Oracle already has its own Linux variant, a clone of Red Hat Enterprise Linux. My gut tells me that the answer here is "Yes." They'll keep both. Oracle can't afford to drop support for Linux, in the same way that Oracle still runs on HP-UX, AIX, and Solaris. I would expect that Solaris will get the bulk of the performance tuning enhancements, however, with Linux relegated to a back-seat role for those who want to run on commodity hardware and software stacks.
  7. Finally, the biggest question in my head is whether Oracle knows how to, or even wants to, manage a hardware business. Oracle is currently a very profitable software business. Getting into the hardware business will inevitably drag down that overall profitability. There are a couple of options here. They could spin off the hardware business, but that would presume that Oracle doesn't want the hardware business when it buys Sun. That feels like a stretch. Sun doesn't have enough interesting software assets for Oracle to purchase the company solely on that basis, I think; Sun is fundamentally a hardware company. Second, Oracle could focus the hardware business, cutting some of the expensive development. For instance, rather than pursuing continued high-end SPARC development, it could simply focus the Sun hardware divisions on x86-based products. That would cut a lot of development costs yet still leave the company with hardware products. The trade-off would be that Oracle would be limited to whatever performance it could eke out of standard hardware, which might not be where it wants to go. Processors like Niagara might be part of the interest for Oracle, to address high-end applications. Another alternative would be to go the other way and cut all the commodity hardware and focus only on the high-end, more profitable hardware. In the end, whatever the decision, the biggest question is whether Oracle can run a hardware business competently. The company is clearly professional but it has never demonstrated hardware savvy before. Will Sun's hardware engineers remain at Oracle, where hardware is secondary to software? There is a lot that could go wrong or be mismanaged, even with the greatest of intentions.

Am I right or wrong? Who knows. There is a lot of industry speculation today. We'll see as this plays out. The one thing that is clear is that the first half of 2009 has been an interesting time in the IT space. First, Cisco announces UCS and forces an industry-wide realignment, and now Oracle buys Sun and does the same thing. It's only April. By December, the world could be vastly different.

Terminated with Extreme Prejudice

Well, I had a snafu all last week. The anti-spam robots over at Google decided that this blog met their criteria as a spam blog and decided to disable it. After much clicking on various links over at Blogger to petition for a review, the gods decided that I was worthy and it seems I'm back online again.

After going through this whole experience, here is some feedback for Google/Blogger:

  1. First, let me applaud your efforts to go after spam blogs. Spam is the scourge of the Internet. At the scale at which Google/Blogger operates, the process of detecting spam blogs has to be automated. With any form of automation, some mistakes will sneak through, either false positives or false negatives. I understand that. So, my first thought at being flagged as a spam blog was, "Okay, no biggie. I'll just petition for a review and we'll get this taken care of." Unfortunately, you made the process unnecessarily painful and disruptive.
  2. My biggest beef with Google/Blogger is that the process for requesting a review is a bit of black magic. After the bots categorized my blog as spam, I was sent a single email saying that I had 20 days to request a review, otherwise my blog would be disabled. Naturally, I missed the email and didn't find out about the issue until I logged into Blogger to create a new post. Fortunately, that was still early in the review period, before the blog had actually been disabled, and there was a clear notice that it would be disabled if I didn't request a review, but missing that first email was a problem as it cut into the 20-day review period.
  3. After logging into Blogger and finding out about being flagged, I immediately requested a review. Again, "No biggie," I thought, "because they have plenty of time to do the review. They'll just pop open a browser, look at the blog, and it will be obvious that it's not spam."
  4. After requesting the review, I got absolutely no feedback from Google/Blogger that the review was happening or even that the request had been received. There was no communication whatsoever. When I logged into Blogger, it would say that a review had been requested on a particular date, but that was it.
  5. After 20 days or whatever, the blog was simply disabled. No warning. No additional emails either right before or after the disabling saying that the blog was about to be disabled or even that it had been disabled. No nothing. I found out that the blog had been disabled from a reader who sent me a message.
  6. Without any communication from Google/Blogger, I was left wondering what happened. Was the review completed, but rejected? Was the request to review ever received? Was there a bug in Google's request or review process? Was it in limbo?
  7. I immediately went online and started checking all the Google help links. There was no help there, other than to say to request a review if your blog got flagged as a spam blog, and one more link to be able to generate another review. There was no appeal procedure, no additional help. In particular, there was no way to reach a real human. The only option available to me was to post to Blogger's support group and grouse about things there. From what I can tell, this is a popular pastime, because there were a lot of other people doing the same thing.
  8. What would have been more helpful is a steady stream of update emails from Blogger after I requested a review. It would be nice, for instance, to get an email confirming the review request. Then, it would be nice to get a periodic (weekly?) status email saying when the review would be completed. I have no idea whether reviews are completed by humans or just another bot. I have no idea how long the queue is. In fact, I have no idea whether a review actually took place for my blog during the 20-day grace period before the blog was disabled. If it did, it would have been nice to receive another email stating that the review was complete and the result, whether rejected, approved, or whatever.
  9. In short, COMMUNICATE. There is no substitute for letting people know what is happening, even if you can't give them good news. Also, make your review procedure as transparent as possible. Tell people what to expect ahead of time. Then deliver to that expectation in terms of communications, timeframes, etc.

My main conclusion from this whole thing is that Blogger is a risky deal for corporate blogs where uptime is critical. Bloggers might be better off going with a non-Blogger solution, either paid hosting or locally installed (e.g. WordPress). Google's procedures are designed to achieve a reasonable service level at high levels of scale, but to do that they deliberately deflect all inquiries away from real humans (FAQs and forums for support) and rely on automated infrastructure (spam bots). When things have really gone wrong, that simply doesn't work. I walked away from this with a profound sense that my data exists on Blogger at Google's whim, and even if I haven't violated any terms of service, my data can be snuffed out at a moment's notice with little to no appeal possible.

Tuesday, March 24, 2009

Dear Mr. Schwartz

Dear Mr. Schwartz,

I was recently reading your blog and found the series of videos you created explaining Sun's strategic direction with respect to systems, software, and open source. Purely from a standpoint of presentation, I'll tell you that the video format works well for you. The fireside-chat-like atmosphere comes across quite well.

In listening to your presentation in video 3, I was also excited by your statements about the coming fusion of networking and servers. You specifically said:

As I've said before, general purpose microprocessors and operating systems are now fast enough to eliminate the need for special purpose devices. That means you can build a router out of a server - notice you cannot build a server out of a router, try as hard as you like. The same applies to storage devices.

To demonstrate this point, we now build our entire line of storage systems from general purpose server parts, including Solaris and ZFS, our open source file system. This allows us to innovate in software, where others have to build custom silicon or add cost. We are planning a similar line of networking platforms, based around the silicon and software you can already find in our portfolio.

We believe both the storage and networking industry's proprietary approach, and their gross profit streams, are now open to those of us with general purpose platforms. That's good news for customers, and for Sun.

Wow! That's great validation for what Vyatta has been saying for a few years now and I'm glad to see Sun joining our cause. I wholeheartedly agree with your overall analysis. In particular, I agree that if there is to be a fusion of networking and servers (and I think it's now obvious to the industry that it's going to happen), it's definitely going to happen on server hardware, not the other way around. I also agree that the gross profit margin of the proprietary networking vendors is exceedingly high and will be the subject of a forthcoming correction, and that this is good for both customers and the new vendors exploiting the benefits of open hardware and software (notably, Vyatta, and perhaps Sun as well as you begin to execute on your strategy).

That said, I think you're probably being optimistic that it's going to happen on Solaris or OpenSolaris. Sure, you guys have invested a lot in Solaris, and it is a pretty cool system, but the reality is that most everything Solaris has, Linux has also. What few things Solaris has that Linux doesn't (Crossbow and ZFS, for instance) will have rough equivalents in Linux shortly. The converse is not true, however; the main thing that Linux has that Solaris doesn't is market momentum, and that can't be added to Solaris quickly. Because of this, I believe that the networking systems of the future are going to be built on a Linux foundation, not Solaris, as cool as it is.

Further, these Linux-based networking systems will not be managed like a Unix system, but rather will be built to familiar interface standards for the target audience. Put another way, in order to speed the adoption of this new model, we'll have to deliver more than high performance and a great cost structure; we'll have to deliver these systems with a network-appliance-like interface rather than a traditional Unix-like interface. You cannot simply take Solaris, run a routing stack on top of it, and call it a router, even though it fundamentally may be quite capable of routing packets well. Any management paradigm that involves a user editing a plethora of configuration files in vi or emacs and then restarting daemons is doomed to a niche position at best because it simply won't fit into the existing management workflows prevalent in the industry today. Because of this, Vyatta has spent a lot of time creating a management paradigm for our system that delivers a standard network appliance look and feel.

So, Mr. Schwartz (and Sun), in summary, welcome to the party. Let's go get 'em.


Dave Roberts
Vice President, Strategy and Marketing

PS: Can you give me any confirmation of the IBM buyout rumor floating around? Is it going to happen or not? If you could lay out the rationale from the buyer's side, that would also be interesting for a lot of people. Some of us still don't get it, but I'll admit to not having a seat at IBM's latest corporate strategy executive summit.

Vyatta's contributions

Jonathan Corbet wrote a nice article last week detailing where the work for the 2.6.29 Linux kernel came from. Reports like this get created from time to time from the git check-in data for the various patches and contributions that make up the kernel source code.

I'm pleased to report that Vyatta ranked very highly in the latest survey. In particular, Stephen Hemminger, Vyatta's resident kernel wizard extraordinaire, was ranked #4 in total changesets contributed to 2.6.29. Stephen is very involved with the Linux networking subsystem, and many of his changesets related to changes in the Linux networking APIs and to fixing broken drivers to conform with those API changes. Thanks to Stephen's work, Vyatta also comes in at #14 on the list of changesets contributed by company. This is fairly significant when you see that the company we were keeping on the list includes mega-Linux supporters like Red Hat, Novell, and IBM, and other mega-corporations like Oracle, Intel, and Nokia.
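For the curious, the core of a report like this is just a tally of changesets per author over the release window. Here's a minimal sketch of the idea; the author names below are made-up sample data standing in for real `git log --pretty=format:'%an'` output, and the actual published reports are built with more elaborate scripts over the full kernel history.

```python
from collections import Counter

# Stand-in sample for the real output of something like:
#   git log v2.6.28..v2.6.29 --pretty=format:'%an'
# where git emits one author name per changeset.
log_authors = [
    "Stephen Hemminger",
    "Stephen Hemminger",
    "Another Developer",
    "Stephen Hemminger",
    "Another Developer",
]

# One changeset per log entry: tally per author, then rank by count.
changesets = Counter(log_authors)
ranking = changesets.most_common()  # list of (author, count), highest first
```

Per-company rankings work the same way, except each changeset is first mapped from the author's email domain to an employer before tallying.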

Sun and IBM: the rumors continue

Hmmm... Well, the Sun/IBM deal is looking more and more real. The WSJ reported on Friday that IBM lawyers were on a due-diligence mission, combing through Sun's various contracts to make sure things are up to snuff. That's pretty good confirmation. You simply don't let another company's lawyers (your competitor's lawyers, in fact) go through your contracts unless there is a good reason. Good reasons include acquisition.