L E S S O N 5
Words and word combinations to memorize
break v – прерывать, останавливать
bundle up v – связывать, упаковывать
come up with v – предложить, придумать
follow v – следить (за), контролировать
impact n – влияние
remote procedure call – дистанционный вызов процедуры
run v – выполнять(ся)
subroutine n – подпрограмма
turnaround a – оборотный
Implementing a Distributed Process between Workstation and Supercomputer
Distributed processing seems to have become one of the latest high-tech terms that members of the computing community all seem to use, but perhaps without really understanding it. It is therefore desirable to come up with a simple definition of distributed processing:
a software product which takes advantage of more than one computer system in order to produce a result.
There are two main reasons why distributed processing is desirable: time and money.
Tasks that take a long time to compute are the most painful for the user, and these are the tasks that must be optimized for speed. Communication time, however, may be a very large factor against distributed processing. In many cases, the communication time can be significantly greater than the computational time, creating an environment in which distributed processing does not seem to be practical or desirable.
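The trade-off described above can be stated as a simple inequality: offloading pays only while remote compute time plus communication time beats local compute time. A minimal sketch (the function name and timings are hypothetical, chosen for illustration):

```python
def worth_distributing(t_local, t_remote, t_comm):
    """Return True if offloading a task is faster than running it locally.

    t_local  -- seconds to compute the task on the local machine
    t_remote -- seconds to compute it on the remote (faster) machine
    t_comm   -- seconds to ship inputs and results over the network
    """
    return t_remote + t_comm < t_local

# A task taking 60 s locally but 5 s remotely is worth shipping only
# while communication stays under 55 s.
print(worth_distributing(60.0, 5.0, 10.0))   # fast link: offload
print(worth_distributing(60.0, 5.0, 120.0))  # slow link: keep it local
```

On a slow link even a much faster remote machine loses, which is exactly the situation the text calls impractical.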
Communications is the fundamental component of any distributed application. The development of extremely large networking systems has complicated the task of creating distributed applications. The goal of any good communications service or system must be to provide both the user and the application developer with an environment for communications in which all the most fundamental aspects of establishing and maintaining a connection are taken care of transparently.
The socket is the basic building block out of which all distributed applications are constructed using TCP. A socket provides a bidirectional stream of communications between two programs running on different machines. Data sent into one end of a socket appears at the other end within the remotely running program. TCP uses a client/server style of relationship between programs. The client program initiates communications by requesting a connection with a particular program running on a specific machine. It is the duty of the server program to accept connection requests from client programs. This leads to the notion of the server process as one which provides a particular service and the client process as one requiring it.
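The client/server relationship above can be sketched with TCP sockets. This minimal example runs both ends on one machine using a thread, but the roles are exactly as the text describes: the server accepts a connection request, the client initiates it, and bytes written into one end appear at the other.

```python
import socket
import threading

def echo_server(sock):
    """Accept one connection and echo back whatever the client sends."""
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Server side: bind, listen, and wait for connection requests.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,)).start()

# Client side: initiate the connection, then use the socket as a
# bidirectional byte stream.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'hello'
```

In a real distributed application the two halves would run on different machines, with the client naming the server's host address instead of the loopback interface.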
The use of TCP at the socket level has an impact on the structure of a distributed application. All data transfer between the different pieces of a distributed code must be done using the read/write style of interface. Often it is easier to distribute an application by looking for subroutines which could be run on a remote machine. Particularly in the case of distributing an already existing code, breaking the code out at the subroutine level is far simpler than restructuring it to make use of the straight socket form of connection. Several packages are available to aid the developer in performing task distribution at the subroutine level; the most notable of these is RPC (remote procedure call). The subroutine removed from the application is moved into a server program along with the other subroutines which are to be distributed. The server program, which runs on a remote system, waits for processing requests from the running application. When a request is received, the proper subroutine is called with the passed parameters. Upon returning, the output parameters are bundled up and sent back to the calling program.
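The RPC pattern described above — a subroutine moved into a server program, parameters bundled up, shipped, and results sent back — can be sketched with Python's standard `xmlrpc` module. This is a modern stand-in for the classic RPC packages the text refers to, not the same library; the subroutine name is illustrative.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: the subroutine removed from the application lives here.
def add_vectors(a, b):
    """A subroutine the client invokes as if it were local."""
    return [x + y for x, y in zip(a, b)]

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(add_vectors)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the parameters are bundled up and shipped to the server;
# the output is bundled up and sent back, all behind normal call syntax.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add_vectors([1, 2, 3], [10, 20, 30])
server.shutdown()
print(result)  # [11, 22, 33]
```

Note that the calling program never touches a socket read/write interface directly — exactly the simplification that makes subroutine-level distribution attractive for existing code.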
MPGS is a multipurpose graphics post-processing system which runs on a graphics workstation and distributes CPU- and memory-intensive tasks to a Cray supercomputer.
The user begins an MPGS session by starting the workstation side of MPGS, followed by the Cray side. The Cray side of MPGS asks for the network address of the workstation and then establishes the communication connection. Next, the user requests MPGS to read the data to be post-processed. The Cray side of MPGS reads the data and then downloads the visible portions of it to the workstation. Downloading only a portion of the data, i.e. the visible portions, is a very important aspect of minimizing network traffic, and is vital to the success of MPGS.
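The idea of shipping only the visible portion of the data can be sketched in a few lines. The mesh layout and visibility flags here are entirely hypothetical — MPGS's actual data structures are not described in the text — but the traffic-saving principle is the same: filter on the supercomputer side before sending.

```python
# Hypothetical mesh held on the supercomputer side: only faces marked
# visible need to cross the network to the workstation.
full_mesh = [
    {"face": 0, "visible": True,  "vertices": [(0, 0), (1, 0), (1, 1)]},
    {"face": 1, "visible": False, "vertices": [(1, 0), (2, 0), (2, 1)]},
    {"face": 2, "visible": True,  "vertices": [(2, 0), (3, 0), (3, 1)]},
]

def visible_subset(mesh):
    """Select only the portion of the data the workstation needs to draw."""
    return [f for f in mesh if f["visible"]]

download = visible_subset(full_mesh)
print(len(download), "of", len(full_mesh), "faces sent")  # 2 of 3 faces sent
```

Network traffic then scales with what is on screen rather than with the full size of the simulation output.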
What is distributed processing?
In what cases is distributed processing practical?
What is the function of a socket?
How does MPGS minimize network traffic?
Translate the sentences, paying attention to the functions of the infinitive
1. Our task is to get good results.
2. The RTK Computer Network is likely to provide a model for how to make public information electronically accessible.
3. We know the first central electric stations to have been built for the supply of electric lights.
4. Experimental data to be presented in detail will be discussed as soon as possible.
5. Serious efforts are now being made to overcome the difficulty.
6. This process is unlikely to take place.
7. To summarize the findings of this tremendous work would require many pages.
8. Microchips have allowed computers to evolve from room-sized to notebook- and palm-sized devices.
9. No line is to be seen when its intensity is predicted to be zero.
10. This modification of the method appears to be of great value.
Knowledge-Based Control of Large Technical Simulations
Technical simulations which require supercomputers frequently produce large amounts of complex data. To handle these data, fast networks and high-powered graphical workstations are mandatory.
Only through the availability of supercomputers could computational engineering be developed. The goal is to simulate the behavior of engineering systems before building them. To understand the consequences of this goal, one has to be aware that engineering systems like cars, computers, or power plants are very complex. Usually they are hierarchically composed of subsystems, components, subcomponents, parts, etc. Numerous engineers specialize in developing and optimizing certain subcomponents or even parts. They have special methods available, derived from both experimental experience and theoretical insight. Their knowledge of the behavior of their contributions is formulated through data, correlations, equations, relations, structural information and rules. In computational engineering we try to integrate the knowledge and experience of different engineers or even engineering groups into one model which simulates the behavior of the corresponding engineering system. Of course such a model has to be very complex. It has to be able to integrate different sources of knowledge and different ways of deducing new insights. It also has to support engineers in using models which, at least in part, were not developed from their own experience. Thus tools like visualization, user-friendly interfaces, or systems which provide expertise about the model or parts of it are very important.
A framework for the development of computational engineering systems was developed. It is called the Integrated Planning and Simulation System. Software engineering combined with mechanical engineering will allow new models to be developed.
As in common engineering practice, data and knowledge bases have been separated from the methods used to acquire knowledge and data or to derive new information. To further reduce complexity, the software engineering principles of modularization and information hiding have been utilized. Consequently, the user interface was strictly separated from the other parts of the system. All the information in the system is also handled through abstract data type modules. They operate on complex data objects which are kept in hierarchically organized databases. As a result of these measures it is possible to build computational engineering models considered to include a degree of complexity similar to that found in real engineering systems.
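The combination of information hiding, abstract data type modules, and hierarchically organized data described above can be illustrated with a small sketch. The class name, path scheme, and stored values are all hypothetical — the text does not specify the system's actual interfaces:

```python
class ComponentStore:
    """A sketch of an abstract data type module: callers manipulate the
    hierarchical engineering data only through these operations, never by
    touching the underlying representation directly (information hiding)."""

    def __init__(self):
        self._tree = {}  # hidden representation: path tuple -> value

    def put(self, path, value):
        """Store a value under a hierarchical path, e.g. ('car', 'engine', 'mass_kg')."""
        self._tree[tuple(path)] = value

    def get(self, path):
        """Retrieve the value stored under a path."""
        return self._tree[tuple(path)]

    def children(self, path):
        """List the immediate sub-components below a path in the hierarchy."""
        prefix = tuple(path)
        n = len(prefix)
        return sorted({p[n] for p in self._tree if p[:n] == prefix and len(p) > n})

store = ComponentStore()
store.put(("car", "engine", "mass_kg"), 180)
store.put(("car", "gearbox", "mass_kg"), 60)
print(store.children(("car",)))  # ['engine', 'gearbox']
```

Because the dictionary is private, the storage scheme could later be swapped for a real database without changing any code that uses the module — the payoff of the separation the text describes.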
SUPERCOMPUTERS IN CLIMATE RESEARCH
Somewhat surprisingly, for an era in which satellites systematically scan every inch of the globe, a wide variety of environmentally important data are in short supply. New computerized monitoring techniques provide a means of quickly closing some of these data gaps. Beyond monitoring, computer systems serve a far more important function: to model industrial and biological systems. The best known of these are likely to be the global warming models. Such simulations require powerful supercomputers because of the thousands of factors that affect climate.
Scientists are known to have theorized since 1896 that emissions of carbon dioxide from the burning of fuels could warm the global atmosphere. It was in the early 1980s, however, when computers sufficiently powerful for modeling the complex behavior of the atmosphere became available, that they were able to test their theories. Supercomputers at the climatological centers of the USA have been programmed to simulate the effects of increased greenhouse gas concentrations on the global climate. In minutes, they perform calculations that would take an unaided scientist a lifetime or more. This computer-based modeling of the atmosphere has produced a remarkable consensus among climatologists about the likelihood and potential scope of global warming. If the earth's atmosphere warms by several degrees within the span of a few decades, there will be enormous impacts on the environment and the global economy. Among the predicted impacts are a rise in sea levels that would threaten coastal populations, and shifts in temperatures.
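The greenhouse mechanism the models simulate can be shown with a zero-dimensional, one-layer energy-balance model — an enormously simplified stand-in for the supercomputer models the text describes, using only the planet's energy budget. The emissivity value below is an illustrative assumption, not a parameter from any model named in the text:

```python
# One-layer "grey atmosphere" energy-balance model.
SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at the top of the atmosphere
ALBEDO = 0.30             # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def surface_temperature(emissivity):
    """Equilibrium surface temperature (K): the atmosphere absorbs a
    fraction `emissivity` of outgoing infrared and re-radiates half of
    it back down, warming the surface."""
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4   # averaged over the sphere
    return (absorbed / (SIGMA * (1 - emissivity / 2))) ** 0.25

print(round(surface_temperature(0.0)))    # no greenhouse effect: ~255 K
print(round(surface_temperature(0.78)))   # with greenhouse effect: ~288 K
```

Even this toy model reproduces the roughly 33 K greenhouse warming of the real surface; the supercomputer models add the thousands of interacting factors — circulation, clouds, oceans — that a single equation cannot capture.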
Climatologists are also using computers to try to determine from local temperature measurements whether such warming is already beginning to occur. The task would be difficult or impossible without supercomputers, since it involves a complex calculation based on thousands of daily readings from around the world that must be adjusted for a variety of complicating factors, including the uneven geographical distribution of monitoring stations.
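One of the complicating factors named above — the uneven geographical distribution of monitoring stations — can be illustrated with a small sketch. The station readings are invented for illustration; the adjustment shown, weighting each reading by the cosine of its latitude to approximate equal-area averaging, is one standard way to correct for stations clustering at particular latitudes:

```python
import math

# Hypothetical station readings: (latitude in degrees, temperature anomaly in C).
readings = [
    (60.0, 1.2),
    (45.0, 0.9),
    (10.0, 0.4),
]

def area_weighted_mean(readings):
    """Average anomalies with cos(latitude) weights, so that crowded
    high-latitude stations do not dominate the global figure."""
    weights = [math.cos(math.radians(lat)) for lat, _ in readings]
    total = sum(w * t for w, (_, t) in zip(weights, readings))
    return total / sum(weights)

plain = sum(t for _, t in readings) / len(readings)
weighted = area_weighted_mean(readings)
print(round(plain, 3), round(weighted, 3))  # the weighted mean is lower here
```

With thousands of daily readings and many such adjustments applied at once, the scale of the computation makes the supercomputer indispensable, as the text notes.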