Compaq, HP, IBM, Intel and Microsoft Create New PC Security Alliance

Wednesday, August 18, 2010

On Monday, October 11, Compaq, Hewlett-Packard, IBM, Intel and Microsoft announced the launch of a new alliance, the Trusted Computing Platform Alliance. The Alliance has chartered itself with the mission of developing a new hardware and software specification that will let technology companies build a more trusted and secure personal computer platform based on common standards. Alliance Chairman David Chan of Hewlett-Packard says, "This workgroup was formed to define the necessary set of capabilities for a security subsystem that would allow a system integrator and solution provider to establish trust on a hardware platform." The Alliance also stated that "personal computers lack a standard set of system hardware-based functions needed to establish trust on the platform."

The cited mission is somewhat nebulous. Are they trying to help Microsoft learn how to secure its widely publicized operating system security holes? Are they trying to develop or certify a PKI (Public Key Infrastructure) solution? Or are they trying to develop desktop and server security standards for systems integrators and solution providers? Whatever the mission, they plan to deliver a proposal for a security specification of sorts by the second half of 2000. The plan is to make the specification available through licensing, subject to proper verification and implementation.

Market Impact

In a world of co-existing truths, it is likely that there are multiple purposes behind this alliance. Microsoft needs to gain consumer confidence in the security of its operating systems, and having two high-profile Unix vendors, HP and IBM, on its side is certainly a good starting point. Compaq, HP, and IBM all want to sell servers, and without the confidence of a secure operating system, many organizations today that want a turnkey commercial off-the-shelf server solution are turning to vendors like Sun Microsystems and Novell. E-commerce is the prevailing internet market driver, and without security, financial transactions are a risk and a liability that smart businesses and organizations are not willing to take.

Though the Alliance may be leaning toward putting more security in the BIOS, there are no quick shortcuts to securing information technology infrastructure. Most security experts agree that a layered security model is the best approach. A layered model secures an organization's network, operating systems, and applications. According to Marcus Ranum, CEO of Network Flight Recorder and the person most often credited with developing the first firewall, "What it seems they're saying is that they're going to develop hardware specs and BIOS extensions that will enable certain security services to the operating system. That's nice but if the operating system isn't good, security-wise, it won't matter what the hardware provides."

If nothing else, the formation of this alliance is sure to heighten security awareness in the information technology sector as a whole. Elias Levy, Chief Technical Officer of Security Focus and moderator of the well-known Bugtraq security mailing list, says, "The alliance is a good idea and has potential. There is a great need to build security features into the basic structure of the computer and the operating system. Only when these features become universal will application writers start making use of them, benefiting the end user. Although it is still too early to tell what the exact deliverables are that the alliance hopes to produce, it is encouraging to see these important companies at least attempting to solve some of these security issues."

User Recommendations

The Alliance invites other companies to participate in helping to architect its mission. If your organization has anything to offer the Alliance, applications for membership are currently being accepted. With such a lofty agenda and aggressive delivery schedule, the Alliance will certainly need all the help it can get. In the meantime, users should not hold their breath. The first step in securing an organization's network is to have a security vulnerability assessment done as soon as possible. In light of the rapidly increasing network and system security break-ins, it would behoove any organization that has confidential information on its network to analyze its risks and take due precaution as soon as possible.


SOURCE:
http://www.technologyevaluation.com/research/articles/compaq-hp-ibm-intel-and-microsoft-create-new-pc-security-alliance-15289/

Compaq's 8-CPU Intel Servers: the New "Big Iron"


In late August, 1999, Compaq started shipments of its eight-CPU Intel servers, the ProLiant 8000 and 8500. The ProLiant 8X00 series is part of the next generation of Intel servers (along with offerings from Dell, HP, IBM, and others) which utilize the Profusion chipset. (This chipset allows servers to break through the previous limitation of four CPUs for the Intel architecture.) Although both products are geared toward the enterprise computing segment, they address different areas within that segment: the PL8000 can function either standalone or in a rack, while the 8500 must be racked and needs other hardware (primarily disk drives) to support its configuration. Whichever model is chosen, these products are aimed at large datacenter/data warehouse environments, as well as other large-scale computing environments. They will also be used to consolidate and upgrade existing servers.

Compaq's main competitors in this space are Dell, HP, and IBM. There are other vendors producing eight-way Intel servers (e.g. Unisys, Hitachi), but we do not believe they will be (serious) market share competitors. (Market share figures for Intel servers are shown in Table 1 and Graph 1.) In general, the Intel server market is growing, and these products will satisfy pent-up demand, but we do not expect the volumes to be significant (when compared to four-way servers) until next year.


Compaq is positioning the ProLiant 8X00 series to address a number of markets:

1. External to the customer: ERP, E-commerce
2. Internal to the customer: mail and messaging, terminal servers
3. General: Data warehousing, datacenters

Compaq is highlighting a number of areas where it feels it has a competitive advantage: performance, price/performance, and technology. In addition to its traditional strength in price/performance and performance, Compaq has an inside track on Profusion's design, due to its co-development efforts with Intel and Corollary (developer of Profusion, bought by Intel in 1996). Since this chipset is the heart of the eight-way architecture, Compaq has gained a short-term advantage.

Because of the relatively low price (approximately $20K base price, vs. $7-$8K for a four-CPU base unit), some "cannibalization" of four-CPU markets is expected.

Although the eight-way servers (in general) are now the "biggest kid on the [Intel] block", this position is expected to last only until Merced/McKinley arrive: 12 months from now for Merced (80% probability), two years for McKinley (60% probability). Since McKinley, not Merced, is expected to provide the performance leap, this should give the current eight-way servers approximately 18-24 months at the top of the Intel scale. After that, these systems become "mid-range" products. Until Merced ships, we expect the worldwide market size for eight-way servers to be approximately $5-$8 Billion. (Note: Merced will not immediately "cannibalize" the market for eight-way servers, because of the change from the current IA-32 architecture to Merced's IA-64 architecture. This change will affect much more than hardware, and therefore migration will not be immediate.)

Product Strengths

8000:

Feature Set/Flexibility: The ProLiant 8000 is presently the only eight-way server from the "Big Four" server manufacturers which can stand alone; all the others (including the ProLiant 8500) are rack-based. (So is the 8000, at 14U high, but there is a tower conversion kit for it.) Additionally, the 8000 can house up to 21 disk drives, allowing lots of raw storage space while also providing the flexibility for a large RAID setup.

Price/Performance: Based on present $/tpmC results from the TPC, Compaq continues to be a price/performance leader ($18.70/tpmC). We expect this leadership to continue, with the only serious competition expected to be from Dell. The raw performance numbers are also very good (>40,000 tpmC), but we expect Dell to post similar numbers within three months (60% probability).
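For readers unfamiliar with the metric, $/tpmC is simply the total price of the TPC-C-tested configuration divided by its throughput. A minimal sketch using the figures quoted above (the implied total price is our back-of-envelope inference, not an audited TPC filing):

```python
def price_per_tpmc(total_system_price: float, tpmc: float) -> float:
    """TPC-C price/performance: cost of the priced configuration per tpmC."""
    return total_system_price / tpmc

# Figures cited in the text; treat them as illustrative, not audited results.
dollars_per_tpmc = 18.70   # ProLiant's cited price/performance
throughput = 40_000        # ">40,000 tpmC"

# Working backward, the implied priced configuration is roughly $748K.
implied_price = dollars_per_tpmc * throughput
```

The same division explains why the raw tpmC leader is not automatically the $/tpmC leader: a cheaper configuration with slightly lower throughput can still win on price/performance.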

8500:

Storage: The 8500 has four hot-swappable hard drive bays, more than any major competitor (except the ProLiant 8000). Although it is not a targeted application, this capability does allow the customer to have an internal RAID setup.

Serviceability: The 8500 is almost completely modular: all of the major components (Main Logic Board (MLB), power supplies, fans, I/O cards, hard disk drives) can be swapped (by the customer) quickly, without tools. This also allows a customer to install a 20-lb. chassis at the top of a six-foot-high rack and add subunits one at a time; this is in contrast to trying to mount a 100+ lb. unit, as has been typical in the industry.

Price/Performance: The 8500's figures ($18.46/tpmC) are even better than the 8000's, and are approximately $1.50/tpmC better than the Unisys Aquanta E2085. However, we expect Dell to post figures similar to Compaq's within three months.

Size: At 7U high, this system is the same size as Compaq's four-way offerings (except the PL 6400R), and thus capable of a "box upgrade" (euphemism for "pull out the old system box, put in one of these") for earlier ProLiant models, or for systems made by Dell and HP. IBM, at 8U high, is at a competitive disadvantage here.

General:

Technology: As mentioned earlier, Compaq co-developed the Profusion with Corollary/Intel. This has already provided Compaq with a slight (~2-3 weeks) advantage with regard to which manufacturer ships eight-way systems first. However, we expect more significant benefit to come from the system's logic design, and any performance or feature advantages that Compaq's engineers can design into the system.

In addition, Compaq's use of "heat pipes" (a cooling technology) allows slightly greater design flexibility (and greater thermal margin) vs. the more conventional use of extruded aluminum heatsinks.

Service/Support: ProLiant servers benefit from Compaq's service/support capabilities (from the Digital acquisition). Although customer-serviceable components are becoming the norm, there is still a strong need for vendor support.

Product Challenges

8000:

No Integrated SCSI Controller: Although performance issues make it desirable for customers to buy add-in SCSI controller(s), they should not be required to do so just to get a working storage subsystem.

Ergonomics: The CD, floppy, and removable media devices are in the wrong place if the unit is freestanding: who wants to bend that far over just to put in a CD or diskette or tape (for backup)?

8500:

No removable media devices: The box is tightly packaged, but having at least one additional media bay would be beneficial to customers.

RAM Capabilities: Profusion can support up to 32GB of RAM. Although Compaq feels customers will never need/use more than the 16GB Compaq provides, having the flexibility to expand would be helpful to a customer.

General:

PCI Slots: Compaq provides only two 66 MHz slots, half the number its competitors offer. Since the same I/O board is used in both products, this comment applies to both.

Corporate Issues: Dell has been gaining ground on Compaq, and Compaq has been "distracted" due to problems with the Digital acquisition. Those problems appear to be subsiding, and we expect Compaq to return to its former focus within 12 months (70% probability), but they are not there yet.

Vendor Recommendations

It is not yet clear that customers want both a rack and a freestanding-optional product. Since the high-end server marketplace will be dominated by rackmount systems (see TEC's Research Note: "High-End Wintel-based Rackmount Servers: The Big Get Bigger," August 1999), it is not clear that a 14U-tall rackmount-which-can-convert will have a sufficiently large market presence. People who need lots of drives can buy storage expansion enclosures to mount underneath a 7U-high server, with little (if any) feature set loss. However, since the mechanical design was just a modest repackaging of the PL 7000, Compaq probably thought the monetary risk was acceptable. Other than this issue, the product is positioned appropriately.

Compaq should modify its PCI implementation by adding more 66 MHz slots, and add at least one more slot overall. These changes would give Compaq parity in almost every feature category, and a clear win in others.

Compaq should leverage everything it can from the Corollary/Intel co-development relationship; this can provide long-term benefits, either through quicker product development or through development of more feature- and performance-rich systems.

Finally, Compaq should use its current time advantage as much as possible with an aggressive sales campaign, and consider using the expected Merced delays as a lever, while carefully balancing against the potential alienation of Intel.

User Recommendations

These products are good choices for clients who have high-end computing environments, such as data warehouses or server consolidation requirements. The feature set and hardware reliability features are excellent, and the only technology concern is based in the Profusion chipset, due to its newness. However, Compaq's co-development relationship should reduce Profusion-related concerns.

The ProLiant 8500 is a better choice for those users who either need the flexibility of mixing and matching components in a rack, or who are still unsure of what their needs are. We believe that the ProLiant 8000 is appropriate only for those users who know they need a self-contained solution, especially if it is the only server they plan to buy. If customers need more than one or two servers, then the 8500 is the better choice.

In addition, the lack of an integrated SCSI controller and the small number of 66 MHz PCI slots can be used by the customer to gain concessions in other areas.


SOURCE:
http://www.technologyevaluation.com/research/articles/compaq-s-8-cpu-intel-servers-the-new-big-iron-15170/

Transmeta to Intel/AMD: Eat Our Dust


September 26, 2000 - In a recent interview, Transmeta Corporation CEO David Ditzel boldly stated that his company's technology is at least five years ahead of that of Intel and AMD, the two leading CPU vendors in the PC marketplace. Transmeta's "Crusoe" CPU was announced in January. Among Crusoe's notable features is its claim of super-low power consumption, resulting in much longer battery life for notebooks and other similar battery-powered computing devices. Current notebooks usually run out of power after 3-4 hours; Transmeta claims Crusoe-based notebooks can run up to eight hours without recharging. This extended life will (theoretically) allow cross-country fliers to work (or play Quake) for the entire flight.

Market Impact

Although we understand Mr. Ditzel's reasoning, we think he is being a little too optimistic, relative to the actual lead Transmeta has on everyone else. Intel has already shipped CPUs with SpeedStep power management, and AMD's PowerNOW! technology is being used in Compaq notebooks. Crusoe's published specs indicate it still has a battery-life advantage over both of those technologies, although audited comparison tests are sparse. While we tentatively agree that it may take a new chip design for the big guys (AMD/Intel) to emulate Transmeta's architectural philosophy (off-loading hardware functionality onto software), we think the development time would be closer to three years (maybe less), not five years. In the computer biz, five years is approximately equal to "forever".

On the plus side for Transmeta: we do not see them repeating the mistakes made when the Alpha chip was first produced. Digital Equipment, which had hoped to make Alpha a serious challenger to the Intel architecture, neglected to get enough strategic alliances lined up early enough, resulting in a processor with tons of power, but few applications and fewer resellers. Transmeta has already lined up Sony, Fujitsu, Hitachi, and IBM as partners, and we expect them to line up more in the coming months. There are other interesting technological features in Crusoe, but it's rare that technology alone causes a sea-change in a market like this (viz. Alpha).

A longer-term problem for Transmeta will be the choice of markets: they're presently targeting the notebook segment, but we're not sure where they'll go beyond that. Ditzel is (presently) content to stay out of the server and desktop market, and focus on the notebook and mobile devices. We think Crusoe will have greater success in the notebook market - mobile devices (Palm, Pocket PC, and the like) have a number of processors already available, and it's unclear that the extra battery life is a big selling feature for the smaller devices.

User Recommendations

Until Transmeta's claims can be verified on a production-grade notebook, potential corporate customers should watch, but not buy.

If Crusoe-based notebooks demonstrate battery life on a par with their claims, then these notebooks have the potential to be a highly valuable tool for "road warriors" - employees usually on the road, such as salesmen.

Customers intending to buy a notebook as a "desktop replacement" will get only modest benefit from the low-power aspects of Crusoe: battery life is relatively unimportant in those situations.

Non-notebook mobile device users: you'll just have to see what becomes available. For now, Palm-based devices, Pocket PC devices, and Internet-ready cell phones are just fine.


SOURCE:
http://www.technologyevaluation.com/research/articles/transmeta-to-intel-amd-eat-our-dust-16147/

Compaq, Dell Announce Eight-Way Intel Servers


Compaq Computer Corporation (August 17th) and Dell Computer Corporation (August 23rd) announced shipment of their new eight-way (8 CPUs) servers, based on Intel's Profusion chipset. (HP and IBM are expected to ship by the end of September.) Early performance figures show an approximate doubling of performance (relative to four-way), and a narrowing of the gap between Unix and Intel/NT servers.

Market Impact

This means that Intel/NT servers are starting to move into the performance band historically owned by Unix servers. As this trend continues, non-Intel servers will have a tougher time gaining customers in anything but the very-high-end market. Four-way servers now get pushed down into the mid-range segment, which will result in price pressures. Compaq and Dell have a slight timing advantage (re: shipments); IBM and HP, although they announced their competing systems in June, are not expected to ship systems until late September. This will reinforce the idea that the Intel server battle is now between Compaq and Dell, with everyone else trying to catch them. Demand is not expected to be high immediately, but is expected to grow - especially if Merced slips beyond September 2000 (20% probability), and customers look for a transition product to "tide them over" until Merced ships. In addition, eight-way servers will hasten customers' server consolidation, replacing four older two-CPU models with one eight-CPU system. (Note that this consolidation will not be appropriate for everyone, nor should it be.)

User Recommendations

Because of the delays in the release of the Profusion chipset, customers may want to exercise caution, or at least ensure they have adequate guarantees, before purchasing an eight-way server. (Profusion may be robust, but customers should always be wary of any new technology.) If the systems are shown to be robust, then customers should also assess their needs to see if an eight-way provides sufficient returns, or if the purchase is just to have the latest and best systems.



SOURCE:
http://www.technologyevaluation.com/research/articles/compaq-dell-announce-eight-way-intel-servers-15345/

Flaw in Intel Xeon 550 Chips: Shipments Stopped


Sep 24, 1999 - Intel Corporation officials confirmed that the company has put a hold on shipping two versions of its 550-MHz Pentium III Xeon chips to OEMs for at least two weeks because of a bug that is causing eight-way servers to freeze on boot-up.

To this point the glitch has been limited to versions of the chip containing the 512KB and 1MB caches. Vendors such as Compaq, which have chosen not to utilize the Saber motherboard, have not experienced any problems to date.

Market Impact

In the short term, this will put a damper on the overall eight-CPU Intel server market, because vendors will be unable to ship systems, and some customers will delay purchases until they see that the problem has been fully resolved. This puts Dell Computer and Hewlett-Packard at a disadvantage, since they use the Saber board set as the core of their servers. Compaq Computer will have both a short-term advantage and a potential disadvantage. The disadvantage arises from the possibility of the marketplace not differentiating between the Saber-based and non-Saber-based servers, thereby lumping Compaq's unaffected servers with those that are affected. Compaq's advantage comes from having an even greater head start on Dell and HP. The long-term effects on the overall market (assuming Intel solves the problem, 95+% probability) will be negligible, since the overall demand for eight-way servers will not decrease. Compaq should accrue some long-term benefit, especially if customers decide that Compaq's development engineers add more value than the competition does.

User Recommendations

Users planning an eight-way Intel server purchase (from a vendor using the Saber motherboard) should consider delaying that purchase until Intel has adequately demonstrated that they have solved the problem. Although caution is always prudent when a problem like this surfaces, users planning to buy from Compaq should be less concerned.


SOURCE:
http://www.technologyevaluation.com/research/articles/flaw-in-intel-xeon-550-chips-shipments-stopped-15338/

Dell's 8-CPU Intel Servers Increasing Its Enterprise Focus


In late September, 1999, Dell Computer Corporation began shipments of its eight-CPU Intel server, the PowerEdge 8450. The PowerEdge 8450 is part of the next generation of Intel servers (along with offerings from Compaq, HP, IBM, and others) which utilize the Profusion chipset. (This chipset allows servers to break through the previous limitation of four CPUs for the Intel architecture.) The PE 8450 is based on Intel's OCPRF100 server (also known as "Saber"), which Dell has modified to improve its serviceability and to add an improved peripheral bay.

This product is geared toward the enterprise computing segment. As with most enterprise-class servers, the 8450 must be racked, and needs other hardware (primarily disk drives) to support its configuration. This product is aimed at the large datacenter/data warehouse environments, as well as other large-scale computing environments. The 8450 will also be used to consolidate and upgrade existing servers.

Dell's main competitors in this space are Compaq, HP, and IBM. There are other vendors producing eight-way Intel servers (e.g. Unisys, Hitachi), but we do not believe they will be serious market share competitors. (Market share figures for Intel servers are shown in Table 1 and Graph 1.) In general, the Intel server market is growing, and these products will satisfy pent-up demand, but we do not expect the volumes to be significant (when compared to four-way servers) until next year.


Dell is positioning the PowerEdge 8450 to address business-critical applications in four key market areas:

1. Compute- and memory-intense applications (e.g. large databases)
2. Enterprise messaging (e.g. MS Exchange)
3. Multi-user Windows NT applications
4. Server consolidation and scalable enterprise computing

Clearly, Dell is focusing on large enterprises and enterprise applications. In addition to its aggressive and focused growth in the last three years, Dell is now moving toward being a complete solution provider. This is evident from the recent deal with IBM Global Services for customer service and support, as well as the recent contract where the PowerEdge servers will run Sun's Solaris operating system.

Although the eight-way servers (in general) are now the most powerful Intel servers available, this position is expected to last only until Merced/McKinley arrive: 12 months from now for Merced (80% probability), two years for McKinley (60% probability). Since McKinley, not Merced, is expected to provide the performance leap, this should give the current eight-way servers approximately 18-24 months at the top of the Intel scale. After that, these systems become "mid-range" products. Until Merced ships, we expect the market size for eight-way servers to be approximately $5-$8 Billion. (Note: Merced will not immediately "cannibalize" the market for eight-way servers, because of the change from the current IA-32 architecture to Merced's IA-64 architecture. This change will affect much more than hardware, and therefore migration will not be immediate.)

Product Strengths

Performance and Price/Performance: Dell's record of excellent price/performance is expected to continue with the PE 8450. We expect the only close competition to be from Compaq ($18.46/tpmC for the ProLiant 8500). We also expect Dell to meet or exceed Compaq's mark of 40,368 tpmC (also for the ProLiant 8500) within three months.

I/O Feature Set: Dell has provided PCI Hot Plug for Windows NT 4.0, correcting a deficiency present in the PowerEdge 6300/6350. In addition, the PE 8450 has four 66 MHz PCI slots, more than the two available from Compaq.

Serviceability: The PowerEdge 8450 has tool-free access to, and removal of, all the key components about which a customer (or service technician) would care: power supplies, fans, disk drives, PCI slots. These features exceed those available in the Intel design, but Compaq's ProLiant 8X00 holds a slight edge here.

General:

Dell's customer satisfaction is very high, and we expect that to continue.

Product Challenges

Incomplete OS Support: Until the end of October, the 8450 will only support the various versions of Windows NT, in contrast to the competition which offers Novell NetWare, SCO UNIX/UnixWare, et al. (Solaris support will be available in late October, NetWare is scheduled to be available in late November.) Windows NT is not yet considered a robust enterprise-class OS, so the lack of alternatives is a deficiency. Dell will certify other OSes (through its DellPlus group), but this is a less compelling message than having those OSes installed in a production environment.

AC Voltage Support: The PowerEdge 8450 requires 208VAC, unlike Compaq's ProLiant 8000 and 8500, which can run off either 110 VAC or 208 VAC. Although 208VAC is required for either system (Compaq or Dell) to run a fully-loaded server, having the option is a modest benefit.

Redundancy: Although Dell's system provides redundancy in most subsystems (fans, power supplies, etc.), there are a couple of areas where there could be improvement. In the power subsystem, Dell has redundant power supplies, while Compaq has redundant Voltage Regulator Modules (VRMs) in addition to the redundant system supplies. (Note that redundant supplies have become a requirement, not an added feature, for enterprise-class systems.) For cooling: the system is characterized by Dell as N+1 redundant, that is, five fans plus one backup fan. Customers may prefer 2N redundancy, for a broader safety margin.
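The N+1 versus 2N trade-off can be made concrete with a toy reliability calculation. Assuming independent fan failures with a hypothetical per-fan failure probability (the 1% figure below is ours, purely illustrative), the chance of dropping below the five fans the system needs differs by several orders of magnitude:

```python
from math import comb

def p_cooling_loss(total_fans: int, fans_needed: int, p_fail: float) -> float:
    """Probability that fewer than `fans_needed` fans survive, assuming
    independent failures, each with probability `p_fail` (binomial model)."""
    max_tolerable = total_fans - fans_needed  # failures the system can absorb
    return sum(
        comb(total_fans, k) * p_fail**k * (1 - p_fail)**(total_fans - k)
        for k in range(max_tolerable + 1, total_fans + 1)
    )

P_FAIL = 0.01  # hypothetical per-fan failure probability over a service interval

n_plus_1 = p_cooling_loss(total_fans=6,  fans_needed=5, p_fail=P_FAIL)  # Dell's N+1
two_n    = p_cooling_loss(total_fans=10, fans_needed=5, p_fail=P_FAIL)  # 2N
# n_plus_1 is on the order of 1e-3; two_n is on the order of 1e-10.
```

This oversimplifies real failure behavior (failures are rarely independent), but it shows why a customer with a narrow thermal margin might pay for 2N.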

General:

Technology: As mentioned earlier, Dell purchases the board set directly from Intel. Given that, the logical conclusion would be that Intel would be able to release this system for shipment before Compaq could ship the ProLiant 8X00. However, this has not happened, due to the current problems with some of the Xeon 550 MHz processors and how they work with the Saber board set (see TEC's News Analysis: "Flaw in Intel Xeon 550 Chips: Shipments Stopped," September 29th, 1999). This is not Dell's fault, and it will not be a long-term problem, but the situation bears scrutiny nonetheless.

Vendor Recommendations

Although Dell has made its name in the Windows NT market, it should consider offering more than just one factory-installed operating system. UNIX is not dead, and the acquisition of ConvergeNet (with its ability to operate SANs in heterogeneous OS environments) should lead Dell into more than just NT. The ConvergeNet acquisition also allows Dell to provide a more flexible SAN solution than it had previously, and Dell should market this aspect aggressively. Using the DellPlus organization to provide installation and support for other OSes is a way of addressing this, but customers may prefer a factory installed solution.

Dell should use its clout with Intel to get power supplies that operate in either voltage range (110/208 VAC), since the current 208 VAC-only requirement puts Dell at a disadvantage relative to Compaq and IBM.

User Recommendations

The PowerEdge 8450 is a good choice for those clients who have high-end computing environments, such as data warehouses or server consolidation, and users who need high performance computing plus the flexibility of mixing and matching components in a rack. The feature set and hardware reliability features are good, and the only technology concern is based in the Profusion chipset, due to its newness. However, Dell's use of an Intel-designed board set should reduce Profusion-related concerns, after the current Xeon/Saber problems are resolved.

The limited OS offerings should be used as leverage, especially by customers who need something other than Windows NT.


SOURCE:
http://www.technologyevaluation.com/research/articles/dell-s-8-cpu-intel-servers-increasing-its-enterprise-focus-15171/

IBM’s Newest NUMA-Q Server to Handle 64 Intel CPUs


May 24, 2000 [Source: IBM] - IBM introduced the world's most powerful Intel-based server, the 64-processor NUMA-Q E410, along with the industry's most affordable technology-leading two-way server, the Netfinity 3500 M20. These products represent the high-end and the low-end of the industry's most scalable Intel-based server line for e-businesses running Windows 2000 and Linux environments.

Powered by Intel's new 700 MHz Pentium III Xeon processors, the NUMA-Q E410 has shattered the industry's foremost data warehousing performance benchmark, doubling the result of Hewlett-Packard's top-of-the-line V-series server at nearly half the cost. TPC-H results may be viewed by visiting the TPC web site at: http://www.tpc.org/New_Result/TPCH_Results.html.

"IBM is enabling customers to build intelligent infrastructures on their own terms with UNIX, Linux or Windows 2000 application environments," said Rod Adkins, general manager, IBM Web Servers. "Our NUMACenter framework allows customers to seamlessly manage IBM's entire line of Intel-based servers with upward integration into higher level management infrastructures."

Key Features of the NUMA-Q E410

High Performance and Scalability: NUMA-Q systems scale from 4 to 64 processors and 64 GB memory in a single system, far exceeding any other Intel-based system on the market. NUMA-Q's near-linear scalability is enabled by its unique four-processor "quad" building block architecture, which allows customers to add balanced I/O and memory as they add processors.

High Availability: NUMA-Q's "mainframe style" multi-path I/O and switched fabric fiber channel SAN (Storage Area Network) capabilities provide a platform with no single point of subsystem failure. Further enhancing availability is connectivity with IBM Enterprise Storage Server featuring multi-port capability, which maximizes I/O by evenly distributing it over all available interface ports for maximum bandwidth. IBM backs NUMA-Q's outstanding availability with the option of aggressive, customer-specific service level agreements. Working closely with IBM, customers have achieved sustained availability ratings of 99.999%.

Investment Protection: The NUMA-Q architecture allows customers to fully leverage their IT investments while taking advantage of the latest technology. NUMA-Q E410 quads are compatible with all existing NUMA-Q servers, which support multiple generations of Intel processors in a single system.

NUMACenter: NUMACenter is a pre-integrated environment combining Netfinity application and web servers running Windows 2000 or Linux and a NUMA-Q database server with a consolidated SAN and systems management including Tivoli software and the Advanced Detection Availability Manager (ADAM). NUMACenter is ideal for rapid deployment and growth of enterprise applications and is widely used by application service providers and e-businesses requiring a highly scalable and flexible compute environment.

NUMA-Q E410 began shipping on May 22 with an entry price of $69,000.

Market Impact

The projected market impact is a tough call on this one. The performance on the TPC-H scale is excellent, and the uptime figure of 99.999% is pretty good (equating to approximately 5 minutes of downtime per year). The uptime is superior to Windows NT-based systems, and the performance is better than HP's Unix offerings (at least, of the ones tested to TPC-H). In a more mainstream package, these figures would help IBM/Sequent (IBM purchased Sequent for their NUMA-Q offerings/architecture) gain sales volume and market share. One caveat: the "five 9s" uptime comment in the press release mentions customers "working closely with IBM" to achieve these figures. This implies that the "average Joe" will not necessarily achieve those results. IBM should provide figures from customers who haven't had their hands held so much.
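The downtime arithmetic behind availability ratings like "five 9s" is easy to verify. A minimal sketch (generic math, not an IBM-specific formula):

```python
# Downtime per year implied by an availability percentage.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def annual_downtime_minutes(availability_pct: float) -> float:
    """Return the minutes of downtime per year for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {annual_downtime_minutes(pct):.1f} min/year of downtime")
# 99.999% ("five 9s") works out to roughly 5.3 minutes per year
```

This is where the "approximately 5 minutes of downtime per year" figure comes from; each additional "9" cuts allowable downtime by a factor of ten.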

An area of concern for us is IBM's entire NUMA-Q strategy. IBM appears to be tentative regarding where in the organization the NUMA-Q architecture and systems should reside. Presently, they are under the aegis of the Web Server group, run by GM Rodney Adkins out of Austin. However, IBM's announcement clearly played up that they consider NUMA-Q to be an extension of the Intel-based Netfinity server line, run by GM John Callies. In many companies, where a product "resides" is often a question of where the "Executive Committee" prefers to place its trust. We believe that both Mr. Adkins and Mr. Callies are doing excellent jobs leading their respective groups, so the management-quality issue is not a factor. This leaves the more prosaic criterion of architectural "fit". Although we can see how the high-end nature of the E410 might lean it toward the Web Server side, we believe that the Intel nature ultimately tips it toward the Netfinity side - especially as Intel processors and servers improve in performance and robustness.

User Recommendations

As with many high-end machines, this product is not for small shops. Although the base price for the server starts at $69,000, it will cost significantly more than that - upwards of $250K - to build a useful system. ("Useful" here means able to utilize the power of the computer, as opposed to buying an underpowered unit for the name.)

The other area of concern is the OS: the performance figures quoted were reached using the "DYNIX/ptx" operating system, the OS developed by Sequent. That would be great, except that IBM has made it clear that DYNIX/ptx will go away within five years - we believe it will be closer to two years - to be replaced by Linux/Monterey. Because of this, we would like to see performance figures for a non-dead-end OS. Until IBM provides more useful figures, we cannot recommend this system for non-legacy applications. However, once the non-dying OS performance figures are available, there will be no strong reason for customers not to consider this system. As with all systems, customers should review their needs and compare them to the benefits such a system will provide.


SOURCE:
http://www.technologyevaluation.com/research/articles/ibm-s-newest-numa-q-server-to-handle-64-intel-cpus-15878/

Intel 820 Chipset Delays Again, Again, Again…


Timna, Intel's first so-called "Smart Integration" microprocessor, will be delayed until early 2001.

The delay comes on the heels of related problems with Intel's 820 chipset. Most recently, Intel announced a recall of 820-based motherboards using a memory translator hub (MTH) to connect SDRAM memory. [See also Intel Faces 820 Chipset Problems (Again).] The Timna delay is related to the MTH problems - an Intel spokesman announced that the chipmaker would design an all-new MTH.

Separately, Intel has indicated that it will no longer support SDRAM for the 820 chipset, recommending the 815 chipset instead. [TEC had predicted this earlier in the year.]

Market Impact

Timna was originally scheduled for late 2000 release. It was conceived as a computer-on-a-chip, integrating processing, I/O, audio, video, and edge connections. Obviously, even Intel can't get everything on one chip, not yet, anyway.

The market winner here is AMD, whose Duron value-line processor is already available and shipping. This means that Intel's current value PC strategy - Celeron processors - will remain in place until next year.

User Recommendations

Expect even deeper discounts than usual at year-end. It's still a competitive market, with Duron and Celeron battling for the low end. The looming prospect of an early 2001 Timna release, combined with year-end inventory pressures, should depress prices in this segment at least 5% by year end (80% likelihood).



SOURCE:
http://www.technologyevaluation.com/research/articles/intel-820-chipset-delays-again-again-again-15918/

Compaq's Alpha - Moving Toward Its Omega?


The Alpha RISC processor was developed in the early 1990s to provide a generational leap in CPU (and system) performance. In addition to the higher performance of the chip itself, Alpha is based on a 64-bit architecture, providing (according to Digital/Compaq) far superior performance to the 16-bit and 32-bit Intel x86 architectures. The other strength is Alpha's ability to run multiple OSes: Windows NT, Unix (Compaq's "Tru64 Unix"), Linux, and VMS. Although Alpha-based products exist in multiple market spaces, their greatest success so far has been in two areas: high-performance workstations and high-end servers. Alpha's cost structure has not allowed it to compete effectively in the low-end (PCs, etc.) marketplace. The present competition comes from Intel-based servers and workstations, workstations from Sun Microsystems (Unix), and Unix-based servers.

* Workstations: Sun (52% of Unix mkt./0% of Win mkt), HP (16%U/23%W), Dell (0%U, 23%W)
* Intel-based servers: Compaq ProLiant (30+% of Intel market), Dell (~15%), HP, IBM (~10% ea.)
* Unix server competition: Sun, HP, IBM

Sales for Alpha are increasing slightly (year-over-year). Alpha-based products are about 7% of the market (NT server and workstation volume) and about 7% of the Unix market [Source: Compaq]. Alpha is unlikely to overtake any other CPU (that it hasn't already passed) in market share.

Product Strengths

* Power: Alpha-based products are strongest in compute-intensive applications, such as computer-aided design (CAD) and very large database applications. Alpha's SPECfp scores have been consistently superior (50%-150% higher) to those of the nearest Intel-architecture processor. [CAD applications often require this power, and the difference is evident to users.]
* Customer Satisfaction: Surveys of the "mid-range" market indicate customers are very happy with Alpha running Unix, its main shortcoming being lack of application software (relative to Sun and HP).
* Flexibility: Alpha products are able to run any of four OSes - Windows NT, Unix ("Tru64 Unix"), Linux, and VMS. This flexibility theoretically allows Compaq to sell Alpha to companies with a multiple-OS environment. However, since Compaq VMS system sales are essentially flat, and Alpha/NT is not presently a significant player, this flexibility is losing its value.
* Reliability: In its Unix and VMS implementations, high-end Alpha products are often used in 24x7 environments. Alpha's power combined with system reliability provides an advantage over Windows NT products for critical applications. (Note: Windows products are not noted for their reliability, and are not generally used in critical-application situations.)

Compaq can strengthen its revenue stream in a few ways:

1. Build on its 64-bit Unix performance advantage
2. Make a concerted effort to get Win64 apps available ASAP
3. Reduce product cost where possible/feasible

Current alliance partners include Intel for chip manufacturing and Microsoft for Windows NT development. It should be noted that the Microsoft relationship has yielded little result for Alpha NT.

Product Challenges

* Cost: Alpha products have been expensive, and their entry-level cost has been high. This has made Alpha unsuitable for low-end products. In addition, Intel is catching up on "raw performance", so Compaq may no longer be able to charge a premium for having the highest numbers on the TPC-C scale. Additionally, the $/tpmC figures for Alpha, although improving in recent test results, still are only competitive in the Unix market, not Windows NT - and only in certain performance bands (~25K tpmC).
* Few true 64-bit applications: Although Compaq/Digital has a fair amount of 64-bit Unix applications, there are few/none for Windows NT, due to Win64 apps not being generally available. Alpha's performance on 32-bit apps is often no better than Intel, especially when the applications are not "Alpha-native". Software Development Kits for Win64 were released around December '98, so robust applications should not be expected until around December '99 at the earliest. Microsoft also does not appear to be making a significant commitment to NT/Alpha - releasing Alpha apps simultaneously with Intel apps provides small advantage to Alpha.
* Merced/McKinley: When Intel finally ships Merced, Alpha's primary architectural advantage over Intel will be significantly reduced, if not disappear altogether. Even though it is likely that Merced (and its follow-on, McKinley) will be expensive - perhaps as costly as Alpha - and that Merced's performance will be lower than Alpha in 2000, its presence will draw customers away from Alpha.

Vendor Recommendations

* Reduce Cost: Alpha's current price/performance ratio ($/tpmC) is typically higher than Intel and HP/Sun. Compaq should make a concerted effort to cost-reduce the Alpha product set, thereby improving this figure. Although some of the needed cost reduction can be accomplished by improving manufacturing efficiencies, there may be "structural" redesign required to make significant improvements. Cost reduction will also allow better market penetration in the mid-range segment. In addition, if Compaq is unwilling to take the necessary steps to make Alpha/NT cost-competitive, then Alpha should exit the NT space. Its new DS10 is an attempt to be low-cost, but it appears geared toward Unix.
* Take advantage of Merced's delay(s): To build market share before Merced becomes a reality, Compaq should make a concerted effort to get Win64 applications up-and-running on Alpha. The longer significant Win64 apps take to become available - if they arrive only just before Merced starts shipping - the less likely it is that non-legacy customers will choose Alpha. Granted, optimized compilers/apps will take awhile, but having Merced there is a powerful deterrent to Alpha. In addition, if Unix on Merced becomes reality (vs. vaporware), it will be another nail in Alpha's coffin.
* Play into Alpha's strengths - performance and Unix: Build up the number of robust 64-bit Unix apps - pay ISVs to develop apps - and sell it aggressively. Improve the performance numbers; Sun and HP are catching up. Compaq has recently announced a $100 million Unix-on-Alpha campaign. If this is successful, it may make Alpha a more serious contender. If it fails, Alpha probably will, too.
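The $/tpmC metric cited above is simply total system cost divided by TPC-C throughput (transactions per minute). A quick sketch with made-up numbers (hypothetical systems, not actual Alpha or Intel results) shows why a faster machine can still lose on price/performance:

```python
# Price/performance in the TPC-C style: cost per tpmC (lower is better).
# All figures below are hypothetical, for illustration only.

def dollars_per_tpmc(total_system_cost: float, tpmc: float) -> float:
    """Cost per transaction-per-minute of TPC-C throughput."""
    return total_system_cost / tpmc

# A hypothetical high-throughput RISC box vs. a cheaper Intel box.
risc = dollars_per_tpmc(2_500_000, 25_000)   # $100/tpmC
intel = dollars_per_tpmc(1_200_000, 20_000)  # $60/tpmC
print(f"RISC: ${risc:.0f}/tpmC vs. Intel: ${intel:.0f}/tpmC")
```

In this illustration the RISC system wins on raw throughput (25K vs. 20K tpmC) but loses on $/tpmC, which is the comparison the recommendations above are driving at.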

User Recommendations

* Alpha servers - Unix only: Compaq has not truly committed to having Alpha as a market presence for NT environments. Until it decides to do so, and until a 64-bit version of NT is available, customers are better served with Intel-based systems for NT.
* Alpha servers for mission-critical applications: Because of Alpha's solid system design and reliability, Alpha Unix servers should always be considered for critical applications, such as 24x7 operation of a factory floor, or powering eBay (or a similar E-commerce company). In addition, Unix shipments went up in 1998, so its demise at the hands of NT does not appear to be imminent.
* Alpha for high-performance workstations and servers: Alpha workstation performance is excellent, consistently winning AIM benchmark awards. Users should consider this for scientific/technical/engineering applications. Please note that this recommendation is only for "top tier" applications requiring lots of compute power. (In other words, running AutoCAD on an Alpha workstation is not cost-effective.)

Long Term Outlook

Digital (and now Compaq) squandered whatever lead/advantage Alpha had over the Intel architecture. Alpha is now, and should continue to be, at most a niche player. Compaq must now expend considerable effort if it wants Alpha to be more than that. Unless significant inroads are made before Merced and McKinley ship, Alpha risks becoming another flashy-but-dying technology.

With the exception of legacy VMS systems, and a relatively small number of NT systems, Alpha seems destined to become a Unix-only system. Although Unix is not dead/dying, NT has passed it in quantity of licenses. Alpha will survive at least another three years (90% survival likelihood), but its prospects diminish after that (30% survival likelihood after five years), unless Compaq can turn things around in the next 12 months. Users should take these factors into consideration when deciding on a technology platform.


SOURCE:
http://www.technologyevaluation.com/research/articles/compaq-s-alpha-moving-toward-its-omega-15175/

Gateway Drops AMD


September 20, 1999 - Gateway Inc. revealed that it plans to stop purchasing microprocessors from Advanced Micro Devices, Inc., after current models are replaced. Gateway will now use only processors manufactured by Intel Corporation. Gateway presently holds the #4 position in U.S. sales volume of PCs. AMD holds the #2 position in U.S. sales volume of microprocessors.

Market Impact

This announcement marks Intel's continued gains in the consumer PC marketplace, achieved primarily through price cuts on its processors, and will lead to faster market consolidation. Gateway's decision will have a negative effect on AMD, reducing both sales volumes and profits. In addition, if AMD is to continue to challenge Intel's dominance, it cannot afford to lose vendors of this size (845K systems shipped in Q2 1999 [1]).

Although this announcement had its most immediate effect on the lower-end PC market, it also means that the Athlon, AMD's high-performing CPU, will lose a valuable sales outlet at a time when it is trying to make a big push. Since Gateway owned the high-end (>$1500) home PC market in 1998 (19.9% market share vs. 15.8% for Compaq and 10.9% for Dell [2]), this is potentially a major loss. (Presently, Gateway accounts for an estimated 5% of AMD's sales volume.) In addition, this compounds AMD's cash flow problems: operating losses in the last two quarters, combined with a drop in both processor sales volume and average selling price. AMD will need to respond aggressively to maintain its long-term viability as an Intel alternative.

User Recommendations

In the short term, the effect on the user is generally neutral-to-positive: pricing will stay the same, perhaps even drop slightly, on Gateway products. Although some AMD chips perform better than some Intel chips, most customers will not notice the difference. Purchasing decisions need not be delayed in response to this announcement.

In the long term, this will have a negative effect for the user: Intel's price reductions were to fight AMD's increased market share. If AMD suffers enough defeats that it no longer is a serious challenger to Intel's dominance, eventually Intel may decide price reductions are no longer necessary to maintain market share.

[1] Source: IDC
[2] Source: Technology User Profile


SOURCE:
http://www.technologyevaluation.com/research/articles/gateway-drops-amd-15340/

Wintel Tries to “Embrace and Extend” the English Language


After more than a month of delays, Microsoft finally announced a stripped-down version of the Windows NT operating system designed to run on appliance servers for shared Net access in offices. Don't expect to pick up a shrink-wrapped copy of Windows for Express Networks (or WEN) at a local computer store. It will only be available loaded on appliance servers, starting with Intel's InBusiness Small Office Network.

Intel's products will be available in mid-March, after some last minute testing. The servers come in two models, starting at $1300 for a Celeron-366 system.

Both of Intel's new Small Office Network servers come only with a shared 56-kbps modem and do not offer support for a broadband connection, which could be important for a small office of as many as 25 people sharing Internet access.

Microsoft touts its WEN operating system as supporting a broadband Internet connection, but Intel representatives say WEN doesn't support the extra NIC (network interface card) that would allow that kind of connection. Within the next few months, both companies will offer hardware and software upgrades to enable broadband on the appliances, according to an Intel spokesperson.

The $1300 InBusiness Small Office Network features a 366-MHz Celeron processor, 64MB of memory (upgraded from 32MB), a 13GB hard drive, 56-kbps modem, and an eight-port hub. The second model, the InBusiness Small Office Network Plus, runs on a Celeron-466, and comes with 64MB of memory, a 56-kbps modem, an eight-port hub, a 13GB hard drive, and an additional 13GB removable drive for mirroring. That unit is expected to carry a street price of $1675.

Do-It-Yourself Networking

Intel's appliance server boxes are aimed at small businesses with little or no technical support staff. With them, small offices can share files and printing, and can manage equipment remotely through a Web interface. The servers will let as many as 25 networked computers share Internet access. If you don't have a technical staff or know-how, WEN guides you through the set-up process with wizards. Microsoft also plans to license its operating system to other vendors, so more WEN-powered appliance servers will be coming to market.

Microsoft planned to unveil WEN nearly a month ago. Microsoft officials wouldn't comment on reasons for the delay, but cited general "complications with Intel's manufacturing approval process." More specifically, one of Intel's server appliance models apparently didn't meet the hardware demands of the WEN operating system. So Intel upgraded the memory on its lower-end model from 32MB to 64MB. It's the additional testing for this modification that has further delayed the product's release, Intel representatives say.

Market Impact

First, let's get one thing straight: this is not an appliance (as the term is generally understood by the market); it is a small server masquerading as an appliance. [Note: For the uninitiated, the term "embrace and extend" is commonly used to refer to Microsoft's practice of taking a standard technology - Java, Kerberos, etc. - and modifying its functionality so that non-Microsoft versions of the same standard are no longer compatible with Microsoft's version. This has its greatest effect in those markets where Microsoft is dominant.]

Wintel is now trying to jump on the server appliance bandwagon, albeit late. Intel has been thinking about selling servers under its own badge for about ten years. Until now, it has had limited success, preferring to sell through some systems companies. The Small Office Network (SON - is that a shot at Sun?) server resembles the Whistle InterJetII small server (now an IBM company and product), but is somewhat limited by comparison - try supporting a 25-person office Internet on a 56K line, see who screams first. Intel and Microsoft claim broadband will be available very soon.

We expect the SON will garner market share simply because it is Wintel, and "no one ever got fired for buying Wintel". Intel has focused on simplifying setup/installation of the server, and is targeting it at the geek-free small office. The product specs are not especially impressive - for about $100 more, the Cobalt Qube2 supports more users, runs a broadband connection, and has product history (i.e., has been in the market for over six months, versus the not-yet-shipping status of the SON). The Qube runs on Linux, which Windows adherents may spurn, but it also supports multi-OS desktop environments, including Windows 9x and Mac OS.

The new Windows for Express Networks (WEN), although a "stripped-down" version of NT, still requires 64MB of RAM in a base configuration. Users can draw their own conclusions regarding how streamlined the OS really is. We also question the wisdom of having the product bounded at 64MB. In addition, we note that Microsoft refers to these appliances as "Windows Powered", which was going to be the new name for Windows CE. Is this Windows CE under a different name? No, but it certainly gets confusing after awhile.

Finally, this announcement points up that Microsoft is increasingly trying to control the hardware market. This trend started overtly with the Server Design Guides, which are more-or-less followed by the major Wintel server vendors. The difference here is that Microsoft has chosen to go straight to Intel first - is this a harbinger of future strong-arming of manufacturers?

User Recommendations

This is of interest primarily because of the participants. The feature set is middle-of-the-road, no matter how many new names one tries to give it. The price is certainly attractive, but there are offerings from other vendors priced in the same ballpark.

Because of the broadband issues, we suggest users wait until those issues are solved. Running a 25-person office on a 56K modem is not an effective strategy. Users also need to get details on the features, such as: what is the firewall? What kind of performance guarantees? Will Intel buy back the old server when I need to upgrade to something beefier since the system is sealed and not upgradeable?

But if you have $1500-$1700 extra lying around, you might want to pick one of these up early to try it out.



SOURCE:
http://www.technologyevaluation.com/research/articles/wintel-tries-to-embrace-and-extend-the-english-language-15619/

Turmoil in CPU-Land


August 29, 2000 - Intel Corp. will recall its 1.13GHz Pentium III chip. Intel officials said the company is recalling the chip due to a problem that could cause certain applications to freeze.

"We found some marginality in the part within certain temperatures within the operating range and certain code sequences (in applications)," said spokesman George Alfs. "We're not happy with the chip and we're going to pull it back."

Only some of the 1.13GHz chips showed the problem, according to the chipmaker. However, the company will recall all 1.13GHz Pentium III processors that have shipped to date. Presently, IBM has shipped the chip, but no other US PC manufacturers have. Dell Computer was alerted to the problem before any of their systems were shipped to customers.

August 30, 2000 [Source: AMD press release]-- AMD announced today that Larry Hollatz, group vice president of the company's Computation Products Group, has resigned to pursue other interests. Effective immediately, Hector de J. Ruiz, AMD president and chief operating officer, will serve as acting group vice president for the business unit which produces PC processors.

Market Impact

First, the Intel problem. The magnitude of the chip recall is nothing like Firestone's recent tire recall (relatively few of these CPUs have made it to customers yet), and the furor will die down before long. But this latest problem, combined with other recent issues (see "Should It Be Renamed 'Unobtainium'?"), indicates to us that there are still underlying problems in the company. We have hinted before that CEO Craig Barrett might want to look into the situation more closely; we reiterate that suggestion. Frankly, we'd be a little surprised if he hadn't already started "kickin' some tail". All of this provides yet another opportunity for AMD to succeed at Intel's expense.

For some time, AMD has been matching Intel shot-for-shot, and in some cases winning (relative to CPU performance). This has elevated AMD from its former "me-too" status to a serious contender in the Intel-architecture market. We don't envision AMD surpassing Intel any time soon (especially because of AMD's non-presence in the server market), but this stuff does make life interesting.

For AMD, we envision a slight loss of momentum (as generally happens when a key person leaves), but we do not presently believe the long-term effects will hurt AMD greatly.

User Recommendations

The normal suggestion would be to check your PC to see if it's using the offending CPU. Since only IBM (of the major vendors) has shipped any systems, and since they seem to have the situation well in hand, users are running a low risk of problem(s).

For the longer term, users should keep a watchful eye on both Intel and AMD, though for different reasons. Intel watchers should see if the missteps continue; if they do, customers should give serious consideration to switching to AMD. A key indicator: if Dell Computer decides to add AMD processors to their product line. Dell has been the most steadfast of Intel's supporters thus far; their "defection" would spell trouble. (No, we have no evidence of this happening, we merely mention it as an indicator.)

AMD will be more difficult to assess, because the effect of a personnel change generally takes a lot longer to become apparent. Additionally, momentum can carry a company for awhile. If we see significant attrition (such as key designers, etc.), this will be a bad sign. If there is no apparent loss of momentum over the next 3-6 months, then AMD will probably weather the change well.


SOURCE:
http://www.technologyevaluation.com/research/articles/turmoil-in-cpu-land-16111/

Should It Be Renamed 'Unobtainium'?


Intel executives have said in a financial briefing during July that Intel will push back the release of its 64-bit Itanium processor (formerly known as "Merced"), recently slated for October, 2000, by a quarter. This means the first Itanium-based products should not arrive until early in 2001 - somewhat later than the original schedule of 1999. The first products are expected to be servers and workstations.

Market Impact

This is not really a big surprise - we've known systems engineers/designers who, after an Intel rep discloses the planned ship date, ask "OK, but when's it really going to ship?" - or simply add 9-12 months to the Intel date.

The turn of events benefits at least three companies and one group: AMD, Compaq, Microsoft and the Trillian/Linux community.

AMD gets some more breathing room for their 64-bit "Sledgehammer" CPU. Although Intel has the lead in mindshare for the 64-bit market (excluding the Unix/Solaris/etc. markets), vulnerabilities have been appearing in recent months. AMD's design capabilities have improved (witness Athlon performance), so Intel domination is no longer a "slam dunk".

Compaq's Alpha chip gets a little more life, although we believe this is just delaying its likely demise. Alpha can still be saved from the "great technology no longer produced" heap, but it will take a strong, concerted effort by Compaq, and it will mean focusing on volume platforms, not the high-end machines such as Wildfire. (Compaq has indicated they still want to keep Alpha alive.) The AlphaServer DS10L is intriguing at 1U (one rack unit), but the available performance figures are unimpressive, given the Alpha's purported power.

Microsoft gains in a different way: 64-bit Windows (W64) is not ready, so the Itanium delay helps reduce the gap between hardware availability and OS/application availability. Although Microsoft does not have a reputation for delivering new OSes on time (at least, not to the "original" schedule), a big gap might provide the Linux community with even more ammunition than it currently has.

Trillian/Linux gains a little more time for developers to produce 64-bit applications. Although they started their 64-bit race a little later than Microsoft, it is still not clear who will get there first with usable applications and OS.

If enough additional delays occur, people may start to question the value of an Itanium release, and may decide to wait until "McKinley", the Itanium/Merced follow-on (presently) scheduled for release in 2001. Unfortunately for some manufacturers - those who are key Intel partners - they are pretty much locked into Itanium.

User Recommendations

Users waiting for 64-bit Intel architecture will need to wait at least a few months longer. If they are committed to Intel-only, then they will just have to put up with the delay.

Those users who want 64-bit power, but have no special loyalty to Intel (or who have no platform preference), should review the other offerings, such as those from Compaq (Alpha only), Sun, and HP (PA-RISC).

Those users who are content with 32-bit computing (gasp!) can keep on using current systems, and wait for the dust to settle.


SOURCE:
http://www.technologyevaluation.com/research/articles/should-it-be-renamed-unobtainium-16000/

Does Microsoft Have Something Against 64-Bit Processors?


February 16, 2000 [Sm@rt Reseller & ZDNet News]
Linux: Itanium's Great 64-Bit Hope?
Microsoft now is expecting Whistler Beta 2 to be its first IA-64 offering.

Through a strange set of converging circumstances, Linux could end up as the preeminent operating system for Intel's 64-bit Itanium chip, due in the second half of this year.

Microsoft is holding to the party line that it will have a 64-bit Windows release ready to ship once Intel officially releases Itanium. But according to an internal Microsoft memo, dated Jan. 1, 2000, viewed by Sm@rt Reseller, the software company expects to release Beta 2 of Whistler, its next version of Windows following Windows 2000, as its first Itanium offering. The final shipping version of Whistler isn't slated to arrive until March 2001, according to the memo.

"Windows needs to be available at Itanium launch. Our goal is to use Beta 2 as the product that fulfills this requirement," said the author of the Microsoft memo, distributed internally to its Windows development team.

The memo, along with this week's spat between Intel and Sun Microsystems Inc. over Solaris for Itanium, leaves Linux looking like it may become the most viable operating system for Itanium, when the chip ships in the third quarter of this year.

No Preordained Plan

The way the Itanium OS story is unfolding is more a result of market forces than any preordained plan on Intel's part.

Intel officials at its semi-annual Intel Developer Forum in Palm Springs this week said they were unaware of any change in Microsoft's plans to deliver simultaneously with Itanium a final, shipping 64-bit version of Windows.

Microsoft "has committed publicly to have 64-bit Windows at [the Itanium] launch," said a skeptical Michael Pope, director of the enterprise program inside Intel's Enterprise Server Group. When Itanium ships, "We will have a production level version of Windows 2000 64-bit," he said. "At a minimum, it will be Windows 2000 as it is today. The question is, how many features [from Whistler] will they add."

More than a year ago, Microsoft delivered a software development kit with tools and documentation to help developers make their applications 64-bit-compilable. However, not even an alpha version of 64-bit Windows exists, Microsoft officials confirm.

This week has not been a red letter one for Itanium OS support. The week began with a public spat between Intel and Sun Microsystems in which Intel is claiming Sun is dragging its feet in developing a version of Solaris optimized for the IA-64 processor.

"What we have seen over this year, is a pattern of a lot more talk than action [from Sun]. That's not what we signed up for," said Paul Otellini, executive vice president and general manager of Intel's Intel Architecture Business Group.

Linux To The Rescue?

Sun and Microsoft aren't the only major OS vendors working feverishly to deliver 64-bit offerings simultaneously with Intel's Itanium. But of all these offerings, only 64-bit Linux has reached beta testing at this point. The others are in alpha or pre-alpha.

In early February, the Trillian Project--a group consisting of Caldera, CERN, Hewlett Packard, IBM, Intel, Red Hat, SGI, SuSE, TurboLinux and VA Linux--released its first beta of a version of Linux optimized for Itanium. This week at the Intel Developer Forum, IBM announced that the Project Monterey team (IBM, the Santa Cruz Operation and Intel) will have an alpha version of Monterey ready to deliver to developers on Feb. 29.

Major operating systems typically go through at least a year of rigorous beta testing before they are released commercially. Given this timetable, it's looking increasingly doubtful that the big OS vendors, like Microsoft, IBM and Sun will have enough time to test, finalize and productize their OSes so they can ship simultaneously with Itanium.

Microsoft declined to comment on the particulars of its 64-bit OS delivery plans or on the contents of the Jan. 1 memo.

Market Impact

In theory, this will give Linux a modest shot in the arm - probably not enough to push it past Windows' market share, but enough to gain a few more market share points. What will be interesting to see is how Intel and Microsoft respond to the likelihood of Win64 (64-bit Windows) being late out of the gate. When it was only the Alpha processor (and Digital/Compaq) that would be hurt by a delay, Redmond could afford to take its time without worrying about market share. With Itanium (formerly Merced), a delay becomes more serious. We can envision any number of scenarios (and their likelihood):

1. MS institutes a crash program to get Win64 ready in time for the Itanium release. Microsoft will probably increase its efforts to get the product out on time. However, we don't foresee them pulling out all the stops, just to avoid missing the Itanium release. [30% probability]

2. Intel delays Itanium by a couple of months, to allow MS to catch up a little. Although this makes interesting fodder for "Wintel conspiracy" freaks, there is minimal benefit to Intel in doing this. (In the short term; in the long-term, Redmond might consider exacting a penalty for Intel's lack of "assistance".) In addition, Intel has shown an increasing willingness to work closely with Linux factions. Of course, we are tactfully ignoring the possibility that Intel will miss its delivery date anyway. [Less than 10% probability]

3. Microsoft does the "crash program" thing, but accepts a three-month lag between Itanium and Win64. Although this is unappealing to Redmond, we think it is the most likely scenario. Companies committed to Itanium and Windows will put up with the wait. The risk lies in companies wavering between the two OSes, but committed to Intel processing. We think the amount of attrition resulting from any delay will be relatively small when compared to other reasons for attrition. In addition, Microsoft could always try to use the "It's more important that we have it for McKinley (due for release in 2001, expected to be significantly more powerful than Itanium) than for Itanium" excuse. [60% probability]

User Recommendations

Users won't be able to try Linux-on-Itanium for another six months or so. Those wishing to try it may want to install Linux on their current Intel-based hardware, to ensure that they feel comfortable with it beforehand. Potential users may also want to speak with someone who already uses Linux-on-Alpha, to see if there are significant/sufficient performance gains. (Bear in mind that this is not an apples-to-apples comparison.)

The theoretical performance gains to be had from the move to 64-bit hardware (including Alpha) are large, but only if the application is also 64-bit. Digital/Compaq learned the hard way that 32-bit apps running on 64-bit hardware do not necessarily run faster. Users should exercise appropriate caution regarding promises made in anticipation of Itanium, relative to performance boosts.

One other side issue: Itanium (unlike AMD's Sledgehammer 64-bit processor) is not strictly compatible with the current x86 architecture. This will further muddy the 64-bit waters. Although Itanium will ship before Sledgehammer, this is no longer a guarantee of market share.



SOURCE:
http://www.technologyevaluation.com/research/articles/does-microsoft-have-something-against-64-bit-processors-15581/

HP Joins the Athlon Pile-On


While Gateway Inc.'s recent rollout of new PCs powered by Advanced Micro Devices Inc.'s Athlon processor drew widespread attention, Hewlett-Packard Co. was quietly stocking the shelves at Sam's Club discount stores with two new desktops that mark its first use of the chip.

HP's addition of the Athlon to its desktop systems means AMD, of Sunnyvale, Calif., now has its top processor, and the chief rival to Intel Corp.'s Pentium III, featured in systems made by four of the five largest PC makers in the United States. The lone holdout remains Dell Computer Corp.

"Certainly we're pleased that interest in Athlon continues to grow and we're attracting more strategic customers," said AMD spokesman Drew Prairie. "It signals a continuing trend that Athlon's performance message is getting out there and there's a strong demand for systems based on it, and OEMs are reacting to that demand."

For its part, HP downplayed the company's unannounced decision to add the Athlon to its Intel-dominated product line. "Really the choice of Athlon isn't any more in favor of performance than a Pentium III, it's just a pricing decision for these models," said Ray Aldrich, a spokesman for HP. HP would not comment on whether the PC maker would feature Athlon chips in any future products.

AMD also reported it grabbed a bigger slice of total processor sales by garnering 16.6 percent of the market, up from 12.6 percent the previous quarter and its highest level in a year. Intel's market share slipped from 83.7 percent in the third quarter to 82 percent for the final quarter of 1999.

Aside from overall strong demand for processors, AMD's return to profitability was also fueled by Athlon's strong showing in the more lucrative high-end consumer PC market. Before introducing the Athlon, AMD chips were mainly featured in low-end systems, where profit margins are much slimmer.

Market Impact

More good news for AMD: it keeps adding key vendors to its customer list, its profits are up, and it is becoming associated with high-end systems, not just the low end of the market.

More good news for the PC market in general: AMD may finally attain sufficient market share to keep Intel at bay. In our opinion, reasonable competition (i.e. where all/most of the competitors actually have a reasonable chance) is a good thing. AMD's increased strength improves the odds of reasonable competition. A side benefit relates to pricing - Intel's typical response when threatened has been to cut prices. Although PC prices have edged up recently, Intel may decide it is time for another round of price cuts.

Intel may be looking for the license plate of the truck that hit it. It recently lost its sole-source status at Gateway, a scant three months after getting AMD booted (See TEC News Analysis article: "Gateway, Jilted by Intel, Kisses and Makes Up with AMD" January 21st, 1999). In addition, Dell has publicly stated that Intel's inability to meet demand is the cause of Dell's Q4 supply woes. Intel will certainly survive, but this will probably put enough fear into it to prompt some operational and management changes.

User Recommendations

This announcement will have only a modest effect on the business user - it is aimed primarily at the consumer market (HP's Pavilion product line). However, the eventual effect will be to reduce prices on all CPUs: Intel's because of its anticipated response to the AMD threat; AMD's because increased shipments will reduce chip cost, and AMD will probably respond in kind to any Intel price cut.

Users who have already planned to use (or at least consider) systems containing the Athlon CPU should be encouraged by this. It should give them enough psychological security (regarding AMD's viability) to allow them to proceed with Athlon purchases that may have been on hold.

This news will not affect those users totally committed to Intel, unless they decide the time is right to consider alternative CPUs.

SOURCE:
http://www.technologyevaluation.com/research/articles/hp-joins-the-athlon-pile-on-15305/

Intel Server Trends


The product space being examined is Intel-based servers (still known as "PC servers" in some areas). In this note, we will examine how five key trends affect users, vendors, and the product(s).

In the Intel-based server market, there are five trends of note:

1. Market consolidation
2. Consolidation of servers within a company
3. Focused functionality, e.g., "server appliances"
4. Movement toward Linux/Linux vs. NT
5. Stratification/segmentation

Each of these trends addresses a different area of the server world and how the market views it:

* Market consolidation relates to the vendor landscape

* Server consolidation relates to the supply and demand of classes of server

* Focused functionality addresses the feature set customers can now expect as well as the partitioning of features/functionality

* Linux movement addresses the operating environment and architecture for servers

* Stratification/segmentation deals with the growing divide between high-end systems and low-end systems, as well as the reduction of categories into which servers are classified

Trends and Discussion

1) Market consolidation

General

"Market consolidation" is the name for the phenomenon where an ever-increasing share of a given market is owned by an ever-decreasing (or static) set of vendors. In the case of Intel-based servers, the four key vendors are the "Big Four" - Compaq, Dell, IBM, and HP. In the last two years, market share for the Big Four has grown from approximately 65% to 75% or higher - a gain of roughly ten percentage points, or about 15% relative growth.
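The distinction between percentage-point gains and relative growth is easy to blur; the arithmetic behind the figures above is worth making explicit:

```python
# Share figures from the text: Big Four at ~65% of the market two years
# ago, ~75% now. "Growth" can mean two different things here.
old_share, new_share = 0.65, 0.75

point_gain = new_share - old_share                   # absolute percentage points
relative_gain = (new_share - old_share) / old_share  # growth of the share itself

print(f"{point_gain:.0%} points gained")       # 10% points gained
print(f"{relative_gain:.1%} relative growth")  # 15.4% relative growth
```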

Users

For the user, this trend is a double-edged sword.

On one hand, this narrows the field of viable vendors from which to choose. In cases where customers had not pre-determined which vendor's products would be used, diversity (i.e., non-consolidation) left the field wide open, perhaps too wide open. (The lack of "definition" in the market meant that just about any vendor could stick a CPU or two in a chassis, add a SCSI or RAID controller and some RAM and disk drives and Presto! there's a server!) Consolidation also means fiercer competition among the remaining vendors. Increased competition usually results in user benefits, either through reduced pricing as vendors fight to increase market share, or through increased functionality.

On the other hand, users may now find their choices effectively limited, similar to network TV before the rise of cable. While the product selection available is adequate and suitable for the needs of most customers, there will always be some percentage whose needs go unmet or unsatisfied. For users to feel a severe impact, we believe the Big Four would have to become the Big Three or Big Two. Even with further consolidation, we expect each vendor to provide approximately ten separate models from which to choose. (Currently, HP provides 11, Dell and IBM provide ten, and Compaq provides 11, plus three more about to be retired.) At some point, though, vendors may decide that offering so many alternatives is a losing strategy.

Vendors

As with users, market consolidation can be double-edged.

On the positive side, as consolidation continues, the companies left standing are reasonably strong and need focus only on the remaining competitors. Even if some small player develops a server with groundbreaking functionality or technology (10% probability), the technology will quickly be assimilated by one or more of the big players. A variant of this is the development of new markets (e.g., server appliances). That market was created by small players, and while some of the Big Four took longer than others to catch on, all of them are now in the game.

On the downside, the competition between the major players will intensify. Before consolidation, big vendors could take advantage of their superior strength and perceived corporate viability to overwhelm the smaller players. With four strong players vying for the same IT dollars, the match is more even, thus more intense.

Product

Although consolidation does not guarantee product line reduction by the survivors, history indicates it is a likely outcome in the long term. An example is Compaq's reduction of ProLiant server models from 12 to nine. This reduction is primarily due to Compaq having too many models covering the same market "space". However, it is unlikely Compaq would feel as great a need to cull models if the competition (i.e., Dell) weren't breathing down its neck.

2) Server Consolidation

General

In general terms, "server consolidation" translates into "replace a large bunch of smaller servers with a smaller bunch of large servers". This is due to the confluence of a number of factors:

1. Processing power and density have increased. Eight-CPU servers are now available from the Big Four, providing the ability to deliver far more transaction-processing power than was available from servers two to three years ago.

2. Computing requirements have increased. As applications and operating systems have hogged more and more computing resources, a commensurate increase in computing power is needed. However, this power is not always needed, leaving resources for other applications to be migrated from smaller machines. In addition, company growth often forces IT Directors to upgrade their computing infrastructure incrementally.

3. Floor space is costly. Despite the increase in computing power, customers - especially those with large computer centers - are trying to stuff as much as they can into ever-smaller spaces. Since outward/sideways expansion does nothing to lower the per-square-foot cost, users prefer to expand upward. This leads to the use of large equipment racks, into which appropriately designed servers can be mounted.

4. Administration of large numbers of servers can be difficult. It is much easier for a system administrator (sysadmin) to keep an eye on 10 eight-CPU systems than on 40 two-CPU systems, even with the ability to set alarms, guardbands, and other methods of monitoring and administration. In addition to the ability to manage the monitoring software more closely, being able to keep a number of servers in close proximity to each other also provides better visual feedback in the event of a fault.

User

For the user/customer, this means there is now the ability in some cases to move everything from a bunch of single-processor boxes into one box. Some companies have instituted trade-in policies to provide further incentives for customers. The benefits of consolidation are most appropriate for users who are trying to move a bunch of smaller applications (such as databases) into a centralized computing structure, or who have two or three larger applications that they want to run on one system.

The downside to server consolidation is the increased risk associated with having a single point of failure. Most high-end general purpose servers now come with redundancy and hot-swappability for power, fans, disks, and PCI cards such as network interface cards, but all that may still not be enough to prevent unwanted downtime. A typical Windows NT uptime "guarantee" is 99.5% - sounds impressive until you realize that translates into about 43 hours of downtime per year. While maintenance (one component of downtime) can often be scheduled, there will almost certainly still be failures.
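To make the 99.5% figure concrete, here is a minimal sketch of the conversion from an uptime percentage to annual downtime:

```python
HOURS_PER_YEAR = 365.25 * 24  # ~8766 hours, averaging over leap years

def annual_downtime_hours(uptime_percent: float) -> float:
    """Hours of allowable downtime per year for a given uptime percentage."""
    return (1 - uptime_percent / 100) * HOURS_PER_YEAR

# A 99.5% "guarantee" still allows nearly two full days of downtime a year.
print(round(annual_downtime_hours(99.5), 1))   # 43.8
print(round(annual_downtime_hours(99.99), 2))  # 0.88 (under an hour)
```

Each extra "nine" cuts the allowance by a factor of ten, which is why uptime guarantees are such a marketing battleground.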

Vendor

Server consolidation means vendors will need to continue focusing on delivering high-performance, high-value systems. For the reasons described above, a large number of customers will want to move away from their older one- and two-CPU servers. This will increase the already strong price pressure on small/workgroup servers. Vendors are willing to take minuscule profit margins (typically 15%) on workgroup servers in exchange for the higher margins (typically 35+%) delivered by the high-end servers. In essence, manufacturers view the low-end servers as a necessary evil.

Product

As more applications get consolidated onto large servers, tuned performance will increase in importance. This does not mean each box will be individually "tuned". Rather, it means that manufacturers will increasingly review the application load and type anticipated by the user, and provide a "pre-packaged" system that addresses most of the cases expected. Dell is already providing server sizing tools on its website. These tools are for specific applications, such as Microsoft Exchange, SAP R/3, Oracle 8i, and Novell ICS, and provide customers with rough guidelines as to what their system might need for CPUs, memory, and storage. Because it is becoming more difficult for a particular server to be "all things to all people" we expect to see more of this sizing/tuning focus in the future.
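To illustrate the idea, here is a sketch of the kind of heuristic such a sizing tool encodes. Dell has not published the internals of its tools, so the workload names, the per-user coefficients, and the `size_server` function below are all invented for illustration:

```python
# Hypothetical per-user resource profiles - the numbers are made up,
# not taken from any vendor's actual sizing guide.
PROFILES = {
    # workload: (MB of RAM per user, transactions/sec per user, GB per user)
    "messaging": (2.0, 0.05, 0.1),
    "erp":       (8.0, 0.20, 0.5),
    "database":  (4.0, 0.50, 1.0),
}

def size_server(workload: str, users: int, tps_per_cpu: float = 50.0):
    """Return a rough (CPUs, RAM in MB, disk in GB) estimate."""
    ram_per_user, tps_per_user, disk_per_user = PROFILES[workload]
    cpus = max(1, round(users * tps_per_user / tps_per_cpu))
    ram_mb = int(users * ram_per_user) + 256   # 256 MB base for the OS
    disk_gb = round(users * disk_per_user, 1)
    return cpus, ram_mb, disk_gb

print(size_server("messaging", 500))  # (1, 1256, 50.0)
```

The point is not the particular numbers but the shape of the tool: given an application profile and a user count, it emits a pre-packaged configuration rather than tuning each box individually.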

3) Focused Functionality

General

The flip side of the more powerful general purpose (GP) server, such as those used for server consolidation, is the focused server, popularly referred to as the "server appliance". This segment grew out of a couple of needs:

1. The need to perform a few specific and focused tasks very well.
GP servers that are "tuned" to provide optimal all-around performance usually perform no task exceptionally well. A wise man once said "You can't optimize for more than one factor at a time". This holds true for servers as well - if you optimize for database retrieval, you are causing another factor (such as Web serving or caching) to degrade.
2. The unwillingness of users to pay for lots of extra/unneeded functionality.
This is the natural by-product of #1 - you need to perform Task A really really well, but you don't want to be forced to pay for the overhead/infrastructure required to perform Tasks C and D.

The rise of the Internet helped force the issue, making people focus on the above factors a lot more than they had previously. For example, companies were accustomed to using large GP servers to interface to the Web. When someone finally realized that you could provide specific, targeted functionality to enhance Web surfing by means of caching technology (storing commonly accessed functions or pages at a local site, rather than on the Web proper), it opened up an entirely new market segment. Server appliances, no longer limited to caching, are the fastest growing segment of the server market. Some estimates are in the 75% CAGR range, about triple the growth rate for the general purpose server market.
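For a sense of what a 75% CAGR means in practice, a quick sketch of compound growth (the base values are illustrative, not market data):

```python
def project(base: float, cagr: float, years: int) -> float:
    """Project a value forward at a compound annual growth rate."""
    return base * (1 + cagr) ** years

# At a 75% CAGR a market more than quintuples in three years;
# at a ~25% rate (roughly a third of that) it doesn't even double.
print(round(project(1.0, 0.75, 3), 2))  # 5.36
print(round(project(1.0, 0.25, 3), 2))  # 1.95
```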

User

Users benefit from server appliances by paying only for the functionality they need. Most users who need/want server appliances will actually need a mix of GP servers and appliances. However, this will still save most users money because at least part of the unneeded functionality will no longer drain money. In addition, targeted appliances usually provide improved performance. In the case of Web caching, being able to load a page in three seconds instead of 30 means more productivity and less frustration at the desktop. (Yes, saving 27 seconds is not much of a productivity boost. The idea is that everyone saves that much time on most of their Web surfing [business-related, of course] throughout the business day.)

Vendor

Most of the Big Four Intel server vendors have appliances available to customers. Implementations vary from loading software such as Novell's ICS onto an existing system (Dell) to OEMing someone's "hot box" and loading caching software onto it (IBM) to OEMing a unit without modification (Gateway) to developing their own system from scratch (Compaq and startups such as Cobalt and CacheFlow). Presently, it is unclear which strategy will yield the best long-term results.

Consolidation in this market has already started: Quantum bought Meridian, maker of the Snap! server, and IBM bought Whistle Communications, maker of a small caching server. We expect some modest proliferation for the next year, followed by a longer period of consolidation.

Product

As mentioned earlier, server appliances made their biggest splash in the caching market. This is primarily due to the glamour associated with all things Webby. Other areas of significant growth include Web serving (lots of ISPs want tons of small servers to provide redundancy and high performance) and Network-Attached Storage (NAS). Currently, the NAS market is "owned" by Network Appliance, although there are other players such as Procom (recently involved in OEMing to Hewlett-Packard) and Auspex (who appears to be losing significant ground and whom we do not expect to survive independently).

4) Linux vs. NT

General

Few issues generate more heat and emotion than the operating system battles of the last three years. The server OS market used to be Unix (of various flavors), Novell NetWare (really a Network Operating System - NOS - more than a "base level" OS), and various proprietary OSes. The rise of Windows NT added a fourth leg to this market. As NT gained server market share, as well as dominating the business desktop, people started seeking an alternative. The Unix market per se was too fragmented to mount a unified "defense" against the encroachment of NT. It appeared that NT would overrun the server OS space much as Windows had demolished other desktop OSes.

In the early 1990s, a then-unknown Finnish student named Linus Torvalds "created" a free version of Unix, dubbed it Linux, and published the source code for the world to see. The idea was to have the programming world improve Linux a little bit at a time. "From these humble beginnings" as Bulwer-Lytton might write, sprang the OS that has now attained the #2 share in the server OS market for 1999. You can still get Linux for free, although most corporate entities choose to pay Red Hat, TurboLinux, Caldera, or one of the other Linux distributors, in order to get documentation and customer support.

Windows NT is still too strong in corporations to be overtaken anytime soon. However, as more and more companies opt for Linux, NT's (or Windows 2000's) position will become more tenuous, and perhaps real competition will arise.

User

Corporate users should not choose Linux solely because it is "not Microsoft". Linux has a number of factors in its favor, such as scalability, robustness, and (if configured properly) security. Each situation where Linux is being considered must be evaluated on its own merits, and not just because Linux is the Next Big Thing.

A key concern of customers is the fragmentation of the Linux market, similar in some ways to that of Unix in general. We expect consolidation to start by the end of 2000. We also expect Red Hat (currently owning approximately 65% of the commercially-available Linux market share) to be one of the survivors - no surprise there. Although Corel's Linux distribution is supposed to be the easiest to load, plus able to run Windows applications, it is a desktop-oriented product, so we don't include it in our assessment.

Vendor

Unsurprisingly, hardware vendors waited until 1999 to provide Linux in their servers. So far Dell is the only vendor with enough courage to factory-install Linux on its servers, but we expect others to follow suit in 2000. (Note that a lot of the Linux-oriented statements to come from server vendors in 1999 were merely "We will support Linux on some/all of our models", without any commitment to factory-install it.)

Vendors will need to intensify their commitment to Linux, at least for the short term. We believe Linux will not go away in the next 1-2 years, so vendors should make the most of it, or cede that market to Dell and whoever else is willing to commit to Linux. Most hardware vendors have formed relationships with at least one or two Linux distributors. We expect 2000 will bring semi-exclusive alliances, similar in ways to the Dell/Intel hardware alliance. We expect those alliances to be driven as much from the software side as the hardware side.

Product

Presently there is no "one size fits all" distribution of Linux. As mentioned earlier, this perpetuates fragmentation in the marketplace, providing an opening for Microsoft to apply some more FUD. To combat the continuing arrows expected from Microsoft, at least one (and probably more) of the Linux distributors needs to bundle "missing" functionality into their version. Although this might be considered a parallel to the bundling that got Microsoft in trouble with the Department of Justice (DOJ), this can be thought of as a matter of long-term survival for the Linux vendors. Corporate IT managers will do what's best for their company, and if Linux does not provide as much needed functionality as Windows NT/2000, then the IT manager will stick with the safe bet of Microsoft.

5) Stratification/segmentation

General

By stratification, we mean the quasi-polarization of the server market into groups with a more tenuous connection than existed in the past. The specific groups we see arising are the very-high-end servers, the mid-range workhorse servers, and the focused servers such as server appliances. Stratification is the by-product of two other trends: server consolidation and focused functionality. As each of these markets grows, it will become more difficult to justify three, four, or five classes of GP server, as had recently been the norm.

User

For users, stratification means fewer long-term choices of machine "class". The typical customer could once buy servers classified as workgroup, departmental, application, enterprise, and "super-enterprise" - classifications generally mapped to the size of the population being supported. In the long term, we expect those classifications to collapse into appliances, workgroup/department, and enterprise, mapped by both functionality and size of supported population.

As stratification continues and variety decreases, competition will become more intense among the major suppliers. We expect the eventual winners to provide customers with low-cost products, flexible designs that can be reconfigured easily either by the factory assembler or the advanced user, and a seamless supply chain process, including rapid delivery of completed product.

Vendor

Stratification will herald the next round of consolidation. As products have fewer classes into which they can be placed, a premium will be placed on those companies who can produce good, flexible designs in the shortest time, with minimum effort and maximum reuse. Not every vendor is up to the task, and so pressure will mount to produce or "move over".

We expect Compaq and Dell to maintain their lead roles in the server market. We expect IBM and HP will have a tougher time keeping up with the pace, although they provide other benefits such as support for multi-architecture environments.

It should be noted that as this trend continues, it will repeat the pattern of the desktop PC market. We predicted some time ago that once servers became less "black magic" and more commoditized, the server market would follow the PC market's trajectory - as has every maturing market since capitalism was invented. The vendors who cannot keep ahead of the trend wave must seek out other avenues of differentiation, such as service and support.

Product

For the general-purpose servers, we expect designs to become even more modular than they already are. Savvy vendors presently develop chassis that can be used for a multitude of products. We expect this to be refined even further, although we don't expect to see chassis build themselves - yet. We do expect chassis design eventually to allow users to both diagnose and repair their units with a minimum of fuss and bother. Of course, by that time, servers should have uptime in the "six 9s" range (i.e., 99.9999% uptime, equal to around 30 seconds of downtime per year - about the same as the telephone system used to maintain). We say that with tongue only partly in cheek - server uptime is becoming another Holy Grail. Today's upper-limit guarantees of 99.99% uptime equate to just under an hour of downtime per year - pretty darn good when many Windows users have to reboot their desktops at least once every few days.

BOTTOM LINE
Conclusions

These trends show that, for the most part, the generalized Intel server market has reached maturity. The main players are known, and mostly in control. The barriers to entry are high and getting higher. Even the nascent server appliance market - presently the only area not dominated by any of the Big Four - can be viewed as evolutionary or revolutionary, depending on one's definition. (We see it as both, actually: revolutionary from a market/marketing standpoint; evolutionary from a technology and "next logical step" standpoint. We tend to think of it in automotive terms: appliances are analogous to the two-seater sports car; general-purpose servers to the family sedan or station wagon.) The various consolidation and segmentation trends also point toward market maturity. The Linux/NT battles do not change the fundamental focus of the server market, only the dynamics of selection within it.

Market maturity does not equate to stagnation - there are still pitched battles going on between Compaq and Dell for domination of the Intel server world, and IBM and HP (distant #3 and #4 in the US market) keep fighting back with newer systems and increasing innovation. All of the Big Four, and some of the smaller players in the general-purpose market, are trying to pack ever-increasing functionality into the same space.


SOURCE:
http://www.technologyevaluation.com/research/articles/intel-server-trends-15735/

Intel Server Trends

|

The product space being examined is Intel-based servers (still known as "PC servers" in some areas). In this note, we will examine how five key trends affect users, vendors, and the product(s).

In the Intel-based server market, there are five trends of note:

1. Market consolidation
2. Consolidation of servers within a company
3. Focused functionality, e.g., "server appliances"
4. Movement toward Linux/Linux vs. NT
5. Stratification/segmentation

Each of these trends addresses a different area of the server world and how the market views it:

* Market consolidation relates to the vendor landscape

* Server consolidation relates to the supply and demand of classes of server

* Focused functionality addresses the feature set customers can now expect as well as the partitioning of features/functionality

* Linux movement addresses the operating environment and architecture for servers;

* stratification/segmentation deals with the growing divide between high-end systems and low-end systems; also deals with the reduction of categories into which servers are classified.

Trends and Discussion

1) Market consolidation

General

"Market consolidation" is the name for the phenomenon where an ever-increasing share of a given market is owned by an ever-decreasing (or static) set of vendors. In the case of Intel-based servers, the four key vendors are the "Big Four" - Compaq, Dell, IBM, and HP. In the last two years, market share for the Big Four has gone from approximately 65% of the market to 75% or higher - a gain of ten percentage points in raw share, or relative growth of approximately 15%.
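The share arithmetic above is easy to verify - ten points of raw share on a 65% base is roughly a 15% relative gain:

```python
# Check the consolidation arithmetic: 65% -> 75% of the market.
old_share, new_share = 0.65, 0.75

point_gain = new_share - old_share        # 0.10, i.e., ten percentage points
relative_growth = point_gain / old_share  # ~0.154, i.e., roughly 15%

print(f"raw gain: {point_gain:.0%} points, relative growth: {relative_growth:.1%}")
```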

Users

For the user, this trend is a double-edged sword.

On one hand, this narrows the field of viable vendors from which to choose. In cases where customers had not pre-determined which vendor's products would be used, diversity (i.e., non-consolidation) left the field wide open, perhaps too wide open. (The lack of "definition" in the market meant that just about any vendor could stick a CPU or two in a chassis, add a SCSI or RAID controller and some RAM and disk drives and Presto! - now there's a server.) Consolidation also means fiercer competition between the remaining vendors. Increased competition usually benefits users, either through reduced pricing as vendors fight to increase market share, or through increased available functionality.

On the other hand, users may now find their choices effectively limited, similar to network TV before the rise of cable. While the product selection available is adequate and suitable for the needs of most customers, there will always be some percentage whose needs go unmet or unsatisfied. For users to feel a severe impact, we believe the Big Four would have to become the Big Three or Big Two. Even with further consolidation, we expect each vendor to provide approximately ten separate models from which to choose. (Currently, HP and Compaq provide 11 each - Compaq with three more about to be retired - while Dell and IBM provide ten each.) At some point, though, vendors may decide that maintaining multiple alternatives is a losing strategy.

Vendors

As with users, market consolidation can be double-edged.

On the positive side, as consolidation continues, the companies left standing are reasonably strong, and need only focus on the remaining competitors. Even if a small player develops a server with groundbreaking functionality or technology (10% probability), the technology will quickly be assimilated by one or more of the big players. A variant of this is the development of new markets (e.g., server appliances). That market was created by small players, and while some of the Big Four took longer than others to catch on, all of them are now in the game.

On the downside, the competition between the major players will intensify. Before consolidation, big vendors could take advantage of their superior strength and perceived corporate viability to overwhelm the smaller players. With four strong players vying for the same IT dollars, the match is more even, thus more intense.

Product

Although consolidation does not guarantee product line reduction by the survivors, history indicates it is a likely outcome in the long term. An example is Compaq's reduction of ProLiant server models from 12 to nine. This reduction is primarily due to Compaq having too many models covering the same market "space". However, it is unlikely Compaq would feel as great a need to cull models if the competition (i.e., Dell) weren't breathing down its neck.

2) Server Consolidation

General

In general terms, "server consolidation" translates into "replace a large bunch of smaller servers with a smaller bunch of large servers". This is due to the confluence of a number of factors:

1. Processing power and density have increased. Eight-CPU servers are now available from the Big Four, providing the ability to deliver far more transaction-processing power than was available from servers two to three years ago.

2. Computing requirements have increased. As applications and operating systems have hogged more and more computing resources, a commensurate increase is needed in computing power. However, this power is not always needed, leaving headroom for other applications to be migrated from smaller machines. In addition, company growth often forces IT directors to upgrade their computing infrastructure incrementally.

3. Floor space is costly. Despite the increase in computing power, customers - especially ones with large computer centers - are trying to stuff as much as they can into ever-smaller spaces. Since outward/sideways expansion does nothing to lower the per-square-foot cost, users prefer to expand upward. This leads to the use of large equipment racks, into which appropriately designed servers can be mounted.

4. Administration of large numbers of servers can be difficult. It is much easier for a system administrator (sysadmin) to keep an eye on 10 eight-CPU systems than on 40 two-CPU systems, even with the ability to set alarms, guardbands, and other methods of monitoring and administration. In addition to allowing the monitoring software to be managed more closely, keeping a number of servers in close proximity to each other also provides better visual feedback in the event of a fault.

User

For the user/customer, this means there is now the ability in some cases to move everything from a bunch of single-processor boxes into one box. Some companies have instituted trade-in policies to provide further incentives for customers. The benefits of consolidation are most appropriate for users who are trying to move a bunch of smaller applications (such as databases) into a centralized computing structure, or who have two or three larger applications that they want to run on one system.

The downside to server consolidation is the increased risk associated with having a single point of failure. Most high-end general purpose servers now come with redundancy and hot-swappability for power, fans, disks, and PCI cards such as network interface cards, but all that may still not be enough to prevent unwanted downtime. A typical Windows NT uptime "guarantee" is 99.5% - sounds impressive until you realize that translates into about 43 hours of downtime per year. While maintenance (one component of downtime) can often be scheduled, there will almost certainly still be failures.
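The downtime figure follows directly from the uptime percentage; a minimal conversion makes it explicit:

```python
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def annual_downtime_hours(uptime_pct: float) -> float:
    """Convert an uptime percentage into hours of downtime per year."""
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

# A 99.5% NT "guarantee" leaves about 43.8 hours of downtime per year.
print(annual_downtime_hours(99.5))  # ~43.8
```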

Vendor

Server consolidation means vendors will need to continue focusing on delivering high-performance, high-value systems. For the reasons described above, a large number of customers will want to move away from their older one- and two-CPU servers. This will increase the already strong price pressure on small/workgroup servers. Vendors are willing to take minuscule profit margins (typically 15%) on workgroup servers in exchange for the higher margins (typically 35+%) delivered by the high-end servers. In essence, manufacturers view the low-end servers as a necessary evil.

Product

As more applications get consolidated onto large servers, tuned performance will increase in importance. This does not mean each box will be individually "tuned". Rather, it means that manufacturers will increasingly review the application load and type anticipated by the user, and provide a "pre-packaged" system that addresses most of the cases expected. Dell is already providing server sizing tools on its website. These tools are for specific applications, such as Microsoft Exchange, SAP R/3, Oracle 8i, and Novell ICS, and provide customers with rough guidelines as to what their system might need for CPUs, memory, and storage. Because it is becoming more difficult for a particular server to be "all things to all people" we expect to see more of this sizing/tuning focus in the future.
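As an illustration only - the tiers and thresholds below are invented, not Dell's actual sizing rules - a sizing tool of this sort boils down to mapping expected load onto a pre-packaged configuration:

```python
def size_server(concurrent_users: int) -> dict:
    """Hypothetical rule-of-thumb sizing; real vendor tools use
    application-specific benchmarks, not these made-up tiers."""
    if concurrent_users <= 100:
        return {"cpus": 1, "ram_mb": 256, "disk_gb": 9}
    elif concurrent_users <= 500:
        return {"cpus": 2, "ram_mb": 512, "disk_gb": 18}
    else:
        return {"cpus": 4, "ram_mb": 1024, "disk_gb": 36}

print(size_server(250))  # {'cpus': 2, 'ram_mb': 512, 'disk_gb': 18}
```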

3) Focused Functionality

General

The flip side of the more powerful general purpose (GP) server, such as those used for server consolidation, is the focused server, popularly referred to as the "server appliance". This segment grew out of a couple of needs:

1. The need to perform a few specific and focused tasks very well.
GP servers that are "tuned" to provide optimal all-around performance usually perform no task exceptionally well. A wise man once said "You can't optimize for more than one factor at a time". This holds true for servers as well - if you optimize for database retrieval, you are causing another factor (such as Web serving or caching) to degrade.
2. The unwillingness of users to pay for lots of extra/unneeded functionality.
This is the natural by-product of #1 - you need to perform Task A really, really well, but you don't want to be forced to pay for the overhead/infrastructure required to perform Tasks B and C.

The rise of the Internet helped force the issue, making people focus on the above factors a lot more than they had previously. For example, companies were accustomed to using large GP servers to interface to the Web. When someone finally realized that you could provide specific, targeted functionality to enhance Web surfing by means of caching technology (storing commonly accessed functions or pages at a local site, rather than on the Web proper), it opened up an entirely new market segment. Server appliances, no longer limited to caching, are the fastest growing segment of the server market. Some estimates are in the 75% CAGR range, about triple the growth rate for the general purpose server market.

User

Users benefit from server appliances by paying only for the functionality they need. Most users who need/want server appliances will actually need a mix of GP servers and appliances. However, this will still save most users money, because they will no longer be paying for at least part of the unneeded functionality. In addition, targeted appliances usually provide improved performance. In the case of Web caching, being able to load a page in three seconds instead of 30 means more productivity and less frustration at the desktop. (Yes, saving 27 seconds is not much of a productivity boost by itself. The idea is that everyone saves that much time on most of their Web surfing [business-related, of course] throughout the business day.)
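To make the aggregate-savings argument concrete - the page counts and headcount below are assumptions for illustration, not measured figures:

```python
# Hypothetical illustration of the Web-caching payoff described above.
seconds_saved_per_page = 30 - 3  # cached vs. uncached page load time
pages_per_user_per_day = 50      # assumed, for illustration
users = 200                      # assumed office size

hours_saved_per_day = seconds_saved_per_page * pages_per_user_per_day * users / 3600
print(f"{hours_saved_per_day:.0f} hours saved per day")  # 75 hours
```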

Vendor

Most of the Big Four Intel server vendors have appliances available to customers. Implementations vary from loading software such as Novell's ICS onto an existing system (Dell) to OEMing someone's "hot box" and loading caching software onto it (IBM) to OEMing a unit without modification (Gateway) to developing their own system from scratch (Compaq and startups such as Cobalt and CacheFlow). Presently, it is unclear which strategy will yield the best long-term results.

Consolidation in this market has already started: Quantum bought Meridian, maker of the Snap! server, and IBM bought Whistle Communications, maker of a small caching server. We expect some modest proliferation for the next year, followed by a longer period of consolidation.

Product

As mentioned earlier, server appliances made their biggest splash in the caching market. This is primarily due to the glamour associated with all things Webby. Other areas of significant growth include Web serving (lots of ISPs want tons of small servers to provide redundancy and high performance) and Network-Attached Storage (NAS). Currently, the NAS market is "owned" by Network Appliance, although there are other players such as Procom (recently involved in OEMing to Hewlett-Packard) and Auspex (who appears to be losing significant ground and whom we do not expect to survive independently).

4) Linux vs. NT

General

Few issues generate more heat and emotion than the operating system battles of the last three years. The server OS market used to be Unix (of various flavors), Novell NetWare (really a Network Operating System - NOS - more than a "base level" OS), and various proprietary OSes. The rise of Windows NT added a fourth leg to this market. As NT gained server market share while also dominating the business desktop, people started seeking an alternative. The Unix market per se was too fragmented to provide a unified "defense" against the encroachment of NT. It appeared that NT would overrun the server OS space much as Windows had demolished other desktop OSes.

In the early 1990s, a then-unknown Finnish student named Linus Torvalds "created" a free version of Unix, dubbed it Linux, and published the source code for the world to see. The idea was to have the programming world improve Linux a little bit at a time. "From these humble beginnings," as Bulwer-Lytton might write, sprang the OS that attained the #2 share in the server OS market for 1999. You can still get Linux for free, although most corporate entities choose to pay Red Hat, TurboLinux, Caldera, or one of the other Linux distributors in order to get documentation and customer support.

Windows NT is still too strong in corporations to be overtaken anytime soon. However, as more and more companies opt for Linux, NT's (or Windows 2000's) position will become more tenuous, and perhaps real competition will arise.

User

Corporate users should not choose Linux solely because it is "not Microsoft". Linux has a number of factors in its favor, such as scalability, robustness, and (if configured properly) security. Each situation where Linux is being considered must be evaluated on its own merits, and not just because Linux is the Next Big Thing.

A key concern of customers is the fragmentation of the Linux market, similar in some ways to that of Unix in general. We expect consolidation to start by the end of 2000. We also expect Red Hat (currently owning approximately 65% of the commercially-available Linux market share) to be one of the survivors - no surprise there. Although Corel's Linux distribution is supposed to be the easiest to load, plus able to run Windows applications, it is a desktop-oriented product, so we don't include it in our assessment.

Vendor

Unsurprisingly, hardware vendors waited until 1999 to offer Linux on their servers. So far Dell is the only vendor with enough courage to factory-install Linux on its servers, but we expect others to follow suit in 2000. (Note that many of the Linux-oriented statements from server vendors in 1999 were merely "We will support Linux on some/all of our models," without any commitment to factory-install it.)

Vendors will need to intensify their commitment to Linux, at least for the short term. We believe Linux will not go away in the next 1-2 years, so vendors should make the most of it, or cede that market to Dell and whoever else is willing to commit to Linux. Most hardware vendors have formed relationships with at least one or two Linux distributors. We expect 2000 will bring semi-exclusive alliances, similar in ways to the Dell/Intel hardware alliance. We expect those alliances to be driven as much from the software side as the hardware side.

Product

Presently there is no "one size fits all" distribution of Linux. As mentioned earlier, this perpetuates fragmentation in the marketplace, providing an opening for Microsoft to apply some more FUD. To combat the continuing arrows expected from Microsoft, at least one (and probably more) of the Linux distributors needs to bundle "missing" functionality into their version. Although this might be considered a parallel to the bundling that got Microsoft in trouble with the Department of Justice (DOJ), this can be thought of as a matter of long-term survival for the Linux vendors. Corporate IT managers will do what's best for their company, and if Linux does not provide as much needed functionality as Windows NT/2000, then the IT manager will stick with the safe bet of Microsoft.

5) Stratification/segmentation

General

By stratification, we mean the quasi-polarization of the server market into groups with a more tenuous connection than existed in the past. Specific groups we see arising are the very-high-end servers, the mid-range workhorse servers, and the focused servers such as server appliances. Stratification is the by-product of two other trends: server consolidation and focused functionality. As each of these markets grows, it will be more difficult to justify the three or four or five classes of GP server that have recently been the norm.

User

For users, stratification means less long-term selection for "class" of machine. The typical customer could once buy servers for classifications such as workgroups, departments, applications, enterprises, and "super-enterprises", which were generally mapped to the size of the population being supported. In the long term, we expect those classifications to change to appliances, workgroup/department, and enterprise. These will be mapped by both functionality and size of supported population.

As stratification continues and variety decreases, competition will become more intense among the major suppliers. We expect the eventual winners to provide customers with low-cost products, designs flexible enough to be reconfigured easily by either the factory assembler or the advanced user, and a seamless supply chain process, including rapid delivery of completed product.

Vendor

Stratification will herald the next round of consolidation. As products have fewer classes into which they can be placed, a premium will be placed on those companies who can produce good, flexible designs in the shortest time, with minimum effort and maximum reuse. Not every vendor is up to the task, and so pressure will mount to produce or "move over".

We expect Compaq and Dell to maintain their lead roles in the server market. We expect IBM and HP will have a tougher time keeping up with the pace, although they provide other benefits such as support for multi-architecture environments.

It should be noted that as this trend continues, it will repeat the pattern of the desktop PC market. We predicted some time ago that once servers became less "black magic" and more commoditized, the server market would follow the PC market trend - as has every maturing market since capitalism was invented. The vendors who cannot keep ahead of the trend wave must seek out other avenues of differentiation, such as service and support.

Product

For the general purpose servers, we expect designs to become even more modular than they already are. Savvy vendors presently develop chassis which can be used for a multitude of products. We expect to see this refined even further, although we don't expect to see chassis build themselves, yet. We do expect chassis design eventually to allow users to both diagnose and repair their units with a minimum of fuss and bother. Of course, by that time, servers should have uptime in the "six 9s" range (i.e., 99.9999% uptime, equal to around 30 seconds of downtime per year, about the same as the telephone system used to maintain). We say that with tongue only partly in cheek - server uptime is becoming another Holy Grail. Today's upper-limit guarantees of 99.99% uptime equate to a little under an hour of downtime per year - pretty darn good when many Windows users have to reboot their desktop at least once every few days.

BOTTOM LINE
Conclusions

These trends show that, for the most part, the generalized Intel server market has now reached maturity. The main players are known, and mostly in control. The barriers to entry are high and getting higher. Even the nascent server appliance market - presently the only area not dominated by any of the Big Four - can be viewed as evolutionary or revolutionary, depending on one's definition. (We see it as both, actually: revolutionary from a market/marketing standpoint; evolutionary from a technology and "next logical step" standpoint. We tend to think of it in automotive terms: appliances are analogous to the two-seater sports car; general-purpose servers to the family sedan or station wagon.) The various consolidation and segmentation trends also point toward market maturity. The Linux/NT battles do not change the fundamental focus of the server market, only the dynamics of selection within the market.

Market maturity does not equate to stagnation - there are still pitched battles going on between Compaq and Dell for domination of the Intel server world, and IBM and HP (distant #3 and #4 in the US market) keep fighting back with newer systems and increasing innovation. All of the Big Four, and some of the smaller players in the general-purpose market, are trying to pack ever-increasing functionality into the same space.


SOURCE:
http://www.technologyevaluation.com/research/articles/intel-server-trends-15735/