Mainframe computer

Mainframe computers (colloquially referred to as "big iron"[1]) are computers used primarily by large organizations for critical applications; bulk data processing, such as census, industry and consumer statistics, enterprise resource planning; and transaction processing. They are larger and have more processing power than some other classes of computers: minicomputers, servers, workstations, and personal computers.

The term originally referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers.[2][3] Later, the term was used to distinguish high-end commercial machines from less powerful units.[4] Most large-scale computer system architectures were established in the 1960s, but continue to evolve. Mainframe computers are often used as servers.

Design

Modern mainframe design is generally defined less by single-task computational speed (typically measured as a MIPS rate, or FLOPS in the case of floating-point calculations) and more by:

  • Redundant internal engineering resulting in high reliability and security
  • Extensive input-output facilities with the ability to offload to separate engines
  • Strict backward compatibility with older software
  • High hardware and computational utilization rates through virtualization to support massive throughput

Their high stability and reliability enable these machines to run uninterrupted for decades.

Mainframes are defined by high availability, one of the main reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. Reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to exploit these features; implemented improperly, they can inhibit the very benefits they are meant to provide. In addition, mainframes are more secure than other computer types: in the NIST National Vulnerability Database, traditional mainframes such as IBM zSeries, Unisys Dorado and Unisys Libra rank among the most secure platforms, with vulnerabilities in the low single digits compared with thousands for Windows, Unix, and Linux.[5] Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM's z/OS and Parallel Sysplex, or Unisys's XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.

In the late 1950s, most mainframes had no explicitly interactive interface, but only accepted sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back office functions such as payroll and customer billing, much of which was based on repeated tape-based sorting and merging operations followed by a print run to preprinted continuous stationery. In cases where interactive terminals were supported, these were used almost exclusively for applications (e.g. airline booking) rather than program development. Typewriter and Teletype devices were also common control consoles for system operators through the 1970s, although ultimately supplanted by keyboard/display devices.

By the early 1970s, many mainframes had acquired interactive user interfaces[NB 1] and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphical terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the mainstream during the 1990s with the advent of personal computers equipped with GUIs. After 2000, most modern mainframes have partially or entirely phased out classic "green screen" terminal access for end users in favour of Web-style user interfaces.[citation needed]

The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes could reduce data center energy costs for power and cooling, and that they could reduce physical space requirements compared to server farms.[6]

Characteristics

Modern mainframes can run multiple different instances of operating systems at the same time. This technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace dozens or even hundreds of smaller physical servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication.[7]

Mainframes can add or hot swap system capacity without disrupting system function, with a specificity and granularity not usually available in other server solutions.[citation needed] Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Many mainframe customers run two machines: one in their primary data center, and one in their backup data center—fully active, partially active, or on standby—in case there is a catastrophe affecting the first building. Test, development, training, and production workload for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice, many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD (in IBM's case)[citation needed] or with shared, geographically dispersed storage provided by EMC or Hitachi.

Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the late 1950s,[NB 2] mainframe designs have included subsidiary hardware[NB 3] (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Gigabyte- to terabyte-size record files are not unusual.[8] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online,[9] and can access it reasonably quickly. Other server families also offload I/O processing and emphasize throughput computing.
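
As a loose software analogy of this division of labor (real channels are dedicated I/O processors; every name below is invented for illustration), device operations can be handed to a separate pool of workers so that the main computation never waits on a device:

```python
# A toy analogy of channel-based I/O: device operations are handed to a
# separate pool of "channel" workers, leaving the main thread free to work
# on in-memory data while the I/O is in flight.
from concurrent.futures import ThreadPoolExecutor

def device_read(block_id: int) -> bytes:
    """Stand-in for a slow device read that a channel would perform."""
    return f"block-{block_id}".encode()

with ThreadPoolExecutor(max_workers=4) as channels:
    pending = [channels.submit(device_read, b) for b in range(8)]   # start I/O
    in_memory_result = sum(i * i for i in range(10_000))            # CPU keeps busy
    blocks = [f.result() for f in pending]                          # gather I/O

print(in_memory_result, len(blocks))
```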

Mainframe return on investment (ROI), like any other computing platform, is dependent on its ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors.

Mainframes also have execution integrity characteristics for fault-tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact to operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.[citation needed]
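
As a loose software analogy of this execute-and-compare scheme (the real mechanism is implemented in hardware; the classes and call counts below are invented for illustration), consider this minimal Python sketch:

```python
# A toy analogy of lock-stepped execution: every operation runs on two
# processors, the results are compared, a mismatch triggers a retry, and
# persistent disagreement shifts the work to a spare processor.

class Processor:
    """A toy processor; `faulty_calls` lists the call numbers that return garbage."""
    def __init__(self, faulty_calls=()):
        self.calls = 0
        self.faulty_calls = set(faulty_calls)

    def execute(self, x):
        self.calls += 1
        return -1 if self.calls in self.faulty_calls else x * 2

def lockstep(x, primary, checker, spare, retries=3):
    """Execute `x` on primary and checker together; retry, then fail over."""
    for _ in range(retries):
        a, b = primary.execute(x), checker.execute(x)
        if a == b:               # both processors took the same "step": trust it
            return a
    return spare.execute(x)      # persistent disagreement: use the spare

faulty = Processor(faulty_calls={1, 2, 3})             # corrupts its first three results
print(lockstep(21, faulty, Processor(), Processor()))  # retries fail; spare answers 42
```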

Current market

IBM, with z Systems, is a major manufacturer in the mainframe market. Unisys manufactures ClearPath Libra mainframes, based on earlier Burroughs MCP products, and ClearPath Dorado mainframes, based on Sperry Univac OS 1100 product lines. In 2000, Hitachi co-developed the zSeries z900 with IBM to share expenses, but the two companies have not collaborated on new Hitachi models since. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's GCOS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe, and Fujitsu (formerly Amdahl) GS21 mainframes globally. NEC with ACOS and Hitachi with AP8000-VOS3[10] still maintain mainframe hardware businesses in the Japanese market.

The amount of vendor investment in mainframe development varies with market share. Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER and Xeon) for lower-end systems. Bull uses a mixture of Itanium and Xeon processors. NEC uses Xeon processors for its low-end ACOS-2 line, but develops the custom NOAH-6 processor for its high-end ACOS-4 series. IBM continues to pursue a different business strategy of mainframe investment and growth.[citation needed] IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as the 5.5 GHz six-core zEC12 mainframe microprocessor of 2012. Unisys produces code-compatible mainframe systems that range from laptops to cabinet-sized mainframes, using homegrown CPUs as well as Xeon processors. IBM is rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.[11]

Furthermore, there exists a market for software applications to manage the performance of mainframe implementations. In addition to IBM, significant players in this market include BMC,[12] Compuware,[13][14] and CA Technologies.[15]

History

Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group of manufacturers was first known as "IBM and the Seven Dwarfs":[16]:p.83 usually Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA, although some lists varied. Later, with the departure of General Electric and RCA, it was referred to as IBM and the BUNCH. IBM's dominance grew out of their 700/7000 series and, later, the development of the 360 series mainframes. The latter architecture has continued to evolve into their current zSeries mainframes which, along with the then Burroughs and Sperry (now Unisys) MCP-based and OS 1100 mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. While IBM's zSeries can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the US were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the BESM series and Strela are examples of independently designed Soviet computers.

Shrinking demand and tough competition started a shakeout in the market in the early 1970s—RCA sold out to UNIVAC and GE sold its business to Honeywell; in the 1980s Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986.

During the 1980s, minicomputer-based systems grew more sophisticated and were able to displace the lower end of the mainframe market. These computers, sometimes called departmental computers, were typified by the DEC VAX.

In 1991, AT&T Corporation briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market, as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop famously predicted that the last mainframe would be unplugged in 1996; in 1993, he cited Cheryl Currid, a computer industry analyst, as saying that the last mainframe "will stop working on December 31, 1999",[17] a reference to the anticipated Year 2000 problem (Y2K).

That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or hundreds of virtual machines on a single mainframe. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.). In late 2000, IBM introduced the 64-bit z/Architecture, acquired numerous software companies such as Cognos, and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. For example, in the fourth quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year, though MIPS shipments increased 4% per year over the preceding two years.[18] Alsop had himself photographed in 2000, symbolically eating his own words ("death of the mainframe").[19]

In 2012, NASA powered down its last mainframe, an IBM System z9.[20] However, IBM's successor to the z9, the z10, led a New York Times reporter to state four years earlier that "mainframe technology — hardware, software and services — remains a large and lucrative business for I.B.M., and mainframes are still the back-office engines behind the world’s financial markets and much of global commerce".[21] As of 2010, while mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results".[22]

In 2015, IBM launched the IBM z13,[23] and in June 2017 the IBM z14.[24][25]

Differences from supercomputers

A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers are used for scientific and engineering problems (high-performance computing) that involve heavy data crunching and number crunching,[26] while mainframes are used for transaction processing. The differences are as follows:

  • Mainframes are built to be reliable for transaction processing as it is commonly understood in the business world: a commercial exchange of goods, services, or money.[citation needed] (Transaction throughput is measured by TPC metrics, which are neither used by nor very helpful for most supercomputing applications.) A typical transaction, as defined by the Transaction Processing Performance Council,[27] would include updating a database system for such things as inventory control (goods), airline reservations (services), or banking (money); a minimal sketch follows this list. A transaction can refer to a set of operations including disk reads/writes, operating system calls, or some form of data transfer from one subsystem to another; such operations do not, by themselves, count toward a computer's raw processing power. Transaction processing is not exclusive to mainframes; it is also performed by microprocessor-based servers and online networks.
  • Supercomputer performance is measured in floating-point operations per second (FLOPS)[28] or in traversed edges per second (TEPS),[29] metrics that are not very meaningful for mainframe applications, while mainframes are sometimes approximately measured in millions of instructions per second (MIPS),[30] a metric not used for supercomputers. MIPS is arguably a poor measure even of a mainframe's real transaction-processing performance, but it, or a similar sub-component of it, may be used for billing purposes. Examples of integer operations (the instructions counted by MIPS) include adding numbers together, checking values, or moving data around in memory; for mainframes, moving information to and from storage (so-called I/O) matters most, and in-memory work helps only indirectly. Floating-point operations, counted by FLOPS, are mostly addition, subtraction, and multiplication of binary floating-point numbers with enough digits of precision to model continuous phenomena such as weather prediction and nuclear simulations; the only recently standardized decimal floating-point formats, not used in supercomputers, are appropriate for monetary values such as those in mainframe applications. In terms of raw computational ability, supercomputers are more powerful.[31]
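
To make the notion of a transaction concrete, here is a minimal Python sketch of the all-or-nothing property described above. It is not TPC benchmark code; the account names, balances, and rollback mechanism are invented purely for illustration.

```python
# A toy "transaction" in the TPC sense: a set of operations (here, two
# balance updates) that must complete as a unit, or not at all.

accounts = {"checking": 500, "savings": 200}

def transfer(src: str, dst: str, amount: int) -> None:
    """Move `amount` between accounts atomically: both updates apply, or neither."""
    snapshot = dict(accounts)           # remember the state for rollback
    try:
        if accounts[src] < amount:
            raise ValueError("insufficient funds")
        accounts[src] -= amount         # operation 1: debit
        accounts[dst] += amount         # operation 2: credit
    except Exception:
        accounts.clear()
        accounts.update(snapshot)       # roll back: the transaction never happened
        raise

transfer("checking", "savings", 100)
print(accounts)                         # {'checking': 400, 'savings': 300}
```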

In 2007,[32] an amalgamation of the different technologies and architectures for supercomputers and mainframes led to the so-called gameframe.

Notes

  1. ^ In some cases these had been introduced in the 1960s, but their deployment became more common in the 1970s.
  2. ^ E.g., the IBM 709 had channels in 1958.
  3. ^ Sometimes computers, sometimes more limited.

References

  1. ^"IBM preps big iron fiesta". The Register. July 20, 2005. 
  2. ^"mainframe, n". Oxford English Dictionary (on-line ed.). 
  3. ^Ebbers, Mike; O’Brien, W.; Ogden, B. (2006). "Introduction to the New Mainframe: z/OS Basics"(PDF). IBM International Technical Support Organization. Retrieved 2007-06-01. 
  4. ^Beach, Thomas E. "Computer Concepts and Terminology: Types of Computers". Archived from the original on July 30, 2015. Retrieved November 17, 2012. 
  5. ^"National Vulnerability Database". Retrieved September 20, 2011. 
  6. ^"Get the facts on IBM vs the Competition- The facts about IBM System z "mainframe"". IBM. Retrieved December 28, 2009. 
  7. ^"Emulation or Virtualization?". 
  8. ^"Largest Commercial Database in Winter Corp. TopTen Survey Tops One Hundred Terabytes". Press release. Retrieved 2008-05-16. 
  9. ^"Improvements in Mainframe Computer Storage Management Practices and Reporting Are Needed to Promote Effective and Efficient Utilization of Disk Resources".  
  10. ^Hitachi AP8000 - VOS3
  11. ^"IBM Opens Latin America's First Mainframe Software Center". Enterprise Networks and Servers. August 2007. 
  12. ^"Mainframe Automation Management". Retrieved 26 October 2012. 
  13. ^"Mainframe Modernization". Retrieved 26 October 2012. 
  14. ^"Automated Mainframe Testing & Auditing". Retrieved 26 October 2012. 
  15. ^"CA Technologies". 
  16. ^Bergin, Thomas J (ed.) (2000). 50 Years of Army Computing: From ENIAC to MSRC. DIANE Publishing. ISBN 0-9702316-1-X. 
  17. ^Alsop, Stewart (Mar 8, 1993). "IBM still has brains to be player in client/server platforms". InfoWorld. Retrieved Dec 26, 2013. 
  18. ^"IBM 4Q2009 Financial Report: CFO's Prepared Remarks"(PDF). IBM. January 19, 2010. 
  19. ^"Stewart Alsop eating his words". Computer History Museum. Retrieved Dec 26, 2013. 
  20. ^Cureton, Linda (11 February 2012). The End of the Mainframe Era at NASA. NASA. Retrieved 31 January 2014. 
  21. ^Lohr, Steve (March 23, 2008). "Why Old Technologies Are Still Kicking". The New York Times. Retrieved Dec 25, 2013. 
  22. ^Ante, Spencer E. (July 22, 2010). "IBM Calculates New Mainframes Into Its Future Sales Growth". The Wall Street Journal. Retrieved Dec 25, 2013. 
  23. ^Press, Gil. "From IBM Mainframe Users Group To Apple 'Welcome IBM. Seriously': This Week In Tech History". Forbes. Retrieved 2016-10-07. 
  24. ^"IBM Mainframe Ushers in New Era of Data Protection". 
  25. ^"IBM unveils new mainframe capable of running more than 12 billion encrypted transactions a day". CNBC. 
  26. ^High-Performance Graph Analysis Retrieved on February 15, 2012
  27. ^Transaction Processing Performance Council Retrieved on December 25, 2009.
  28. ^The "Top 500" list of High Performance Computing (HPC) systems Retrieved on July 19, 2016
  29. ^The Graph 500Archived 2011-12-27 at the Wayback Machine. Retrieved on February 19, 2012
  30. ^Resource consumption for billing and performance purposes is measured in units of a million service units (MSUs), but the definition of MSU varies from processor to processor in such a fashion as to make MSU's/s useless for comparing processor performance.
  31. ^World's Top Supercomputer Retrieved on December 25, 2009
  32. ^"Cell Broadband Engine Project Aims to Supercharge IBM Mainframe for Virtual Worlds". 26 April 2007. 


The classroom looked like a call center. Long tables were divided by partitions into individual work stations, where students sat at computers. At one of the stations, a student was logged into software on a distant server, working on math problems at her own pace. The software presented questions, she answered them, and the computer instantly evaluated the answer. When she answered a sufficient number of problems correctly, she advanced to the next section. The gas-plasma monitors attached to the computers displayed the text and graphics with a monochromatic orange glow.

This was in 1972. The computer terminals were connected to a mainframe computer at the University of Illinois at Urbana-Champaign, which ran software called Programmed Logic for Automated Teaching Operations (PLATO). The software had been developed in the nineteen-sixties as an experiment in computer-assisted instruction, and by the seventies a nationwide network allowed thousands of terminals to simultaneously connect to the mainframes in Urbana.* Despite the success of the program, PLATO was far from a new concept—from the earliest days of mainframe computing, technologists have explored how to use computers to complement or supplement human teachers.

At first glance, this impulse makes sense. Computers are exceptionally good at tasks that can be encoded into routines with well-defined inputs, outputs, and goals. The first PLATO-specific programming language was TUTOR, an authoring language that allowed programmers to write online problem sets.* A problem written in TUTOR had, at minimum, a question and an answer bank. Some answer banks were quite simple. For example, the answer bank for the question “What is 3 + 2?” might be “5.” But answer banks could also be more complicated, by accepting multiple correct answers or by ignoring certain prefatory words. For instance, the answer bank for that same question could also be “(5, five, fiv).” With this more sophisticated answer bank, TUTOR would accept “5,” “five,” “fiv,” “it is 5,” or “the answer is fiv” as correct answers.

This sort of pattern-matching was at the heart of TUTOR. Students typed characters in response to a prompt, and TUTOR determined if those characters matched the accepted answers in the bank. But TUTOR had no real semantic understanding of the problem being posed or the answers given. “What is 3+2?” is just an arbitrary string of characters, and “5” is just one more arbitrary character. TUTOR did not need to evaluate the arithmetic of the problem. It could simply evaluate whether the syntax of a student’s answer matched the syntax of the answer bank.
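
To illustrate, here is a minimal sketch in Python (not actual TUTOR code) of this kind of purely syntactic matching, using the "3 + 2" example above. The list of ignored words is an invented stand-in for TUTOR's real mechanisms.

```python
# A toy TUTOR-style matcher: a response is correct if, after dropping
# ignored prefatory words, it reduces to an entry in the answer bank.
# No arithmetic is ever evaluated; this is pure pattern-matching.

IGNORED_WORDS = {"it", "is", "the", "answer"}   # invented "ignore" list

def matches(response: str, answer_bank: set[str]) -> bool:
    """Return True if the stripped response appears in the answer bank."""
    kept = [w for w in response.lower().split() if w not in IGNORED_WORDS]
    return " ".join(kept) in answer_bank

bank = {"5", "five", "fiv"}   # answer bank for "What is 3 + 2?"

for attempt in ["5", "the answer is fiv", "it is 5", "six"]:
    print(attempt, "->", "correct" if matches(attempt, bank) else "incorrect")
```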

Humans are much slower than computers at this kind of pattern-matching, as anyone who has graded a stack of homework can attest, and in response educators have developed a variety of technologies to speed up the process. Scantron systems allow students to encode answers as bubbles on multiple-choice forms, and optical-recognition tools quickly identify whether the bubbles are filled in correctly. In nineteenth-century America, one-room schoolhouses employed the monitorial method, in which older students evaluated the recitations of younger ones. Younger students memorized sections of textbooks and recited those sections to older students, who had previously memorized the same sections and had the book in front of them for good measure. Monitors often did not understand the semantic meaning of the words being recited, but they ensured that the syntactical input (the recitation) matched the answer bank (the textbook). TUTOR and its descendants are very fast and accurate versions of these monitors.

Forty years after PLATO, interest in computer-assisted instruction is surging. New firms, such as DreamBox and Knewton, have joined more established companies like Achieve3000 and Carnegie Learning in providing “intelligent tutors” for “adaptive instruction” or “personalized learning.” In the first quarter of 2014, over half a billion dollars was invested in education-technology startups. Not surprisingly, these intelligent tutors have grown fastest in fields in which many problems have well-defined correct answers, such as mathematics and computer science. In domains where student performances are more nuanced, machine-learning algorithms have seen more modest success.

Take, for instance, essay-grading software. Computers cannot read the semantic meaning of student texts, so autograders work by reducing student writing to syntax. First, humans grade a small training set of essays, which then go through a process of text preparation. Autograders remove the most common words, like “a,” “and,” and “the.” The order of words is then ignored, and the words are aggregated into a list, evocatively called a “bag of words.” Computers calculate different relationships among these words, such as the frequency of all possible pairwise combinations of any two words, and summarize these relationships as a quantitative expression. For each document in the training set, the autograder then correlates the quantitative representation of the syntax of the essay with the grade assigned by the human, the assessment of semantic quality.

The final step is pattern-matching. The algorithm works through each ungraded essay, compares its syntactic patterns to those of the essays in the training set, and assigns a grade based on the similarity. In other words, if an ungraded bag of words has the same quantitative properties as a high-scoring bag of words from the training set, then the software assigns a high score. If those syntactic patterns are more similar to a low-scoring essay's, the software assigns a low score.
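
A toy version of this pipeline can be sketched in a few lines of Python. It is only an illustration of the idea: it uses bare word counts and a single nearest neighbor, where production autograders use richer features (such as the pairwise co-occurrence statistics mentioned above) and statistical models fitted to much larger training sets. The essays and grades below are invented.

```python
# A toy autograder: reduce each essay to a "bag of words", then grade an
# unseen essay by finding the most syntactically similar graded essay.
import math
from collections import Counter

STOPWORDS = {"a", "an", "and", "the", "of", "to", "in", "is"}

def bag_of_words(text: str) -> Counter:
    """Lowercase, drop common words, ignore word order: the 'bag of words'."""
    return Counter(w for w in text.lower().split() if w not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def grade(essay: str, training_set: list[tuple[str, int]]) -> int:
    """Assign the grade of the syntactically most similar training essay."""
    vec = bag_of_words(essay)
    return max(training_set, key=lambda pair: cosine(vec, bag_of_words(pair[0])))[1]

# Hypothetical human-graded training set: (essay text, grade).
training = [
    ("the mainframe remains central to transaction processing", 5),
    ("computers is good and stuff", 2),
]
print(grade("mainframes are central to processing transactions", training))  # -> 5
```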

In some ways, grading by machine learning is a marvel of modern computation. In other ways, it’s a kluge that reduces a complex human performance into patterns that can be algorithmically matched. The performance of these autograding systems is still limited and public suspicion of them is high, so most intelligent tutoring systems have made no effort to parse student writing. They stick to the parts of the curriculum with the most constrained, structurally defined answers, not because they are the most important but because the pattern-matching is easier to program.

This presents an odd conundrum. In the forty years since PLATO, educational technologists have made progress in teaching parts of the curriculum that can be most easily reduced to routines, but we have made very little progress in expanding the range of what these programs can do. During those same forty years, in nearly every other sector of society, computers have reduced the necessity of performing tasks that can be reduced to a routine. Computers, therefore, are best at assessing human performance in the sorts of tasks in which humans have already been replaced by computers.

Perhaps the most concerning part of these developments is that our technology for high-stakes testing mirrors our technology for intelligent tutors. We use machine learning in a limited way for grading essays on tests, but for the most part those tests are dominated by assessment methods—multiple choice and quantitative input—in which computers can quickly compare student responses to an answer bank. We’re pretty good at testing the kinds of things that intelligent tutors can teach, but we’re not nearly as good at testing the kinds of things that the labor market increasingly rewards. In “Dancing with Robots,” an excellent paper on contemporary education, Frank Levy and Richard Murnane argue that the pressing challenge of the educational system is to “educate many more young people for the jobs computers cannot do.” Schooling that trains students to efficiently conduct routine tasks is training students for jobs that pay minimum wage—or jobs that simply no longer exist.


Correction: An earlier version of this post suggested that Michigan hosted thousands of terminals connected to the PLATO network. It also incorrectly suggested that TUTOR was the first programming language used on PLATO.
