Hardware and software concepts in distributed systems
A distributed system is driven by many entities, including the physical, tangible components that we can touch, called hardware, and the programs and commands that drive the hardware, called software. Software refers to the set of instructions, fed in the form of programs, that governs the computer system and directs the hardware components. For example, the antivirus we use to protect our computer system is a type of software.
Media players used to play multimedia files such as movies and music, and Microsoft Office, used to edit documents, are also software. Depending on its use and area of implementation, software can be divided into three major types: system software, application software, and utility software.
System Software
System software directly allows the user to interact with the hardware components of a computer system.
As humans and machines follow different languages, there has to be an interface that allows users to interact with the core system; this interface is provided by the system software. System software can be called the main, or alpha, software of a computer system, as it handles the major portion of running the hardware. It can be further divided into four major types: The Operating System — the main program that governs and maintains the inter-cooperation of the components of a computer system.
The Language Processor — the hardware components of a computer system do not understand human language, so a language processor translates program instructions into machine code.
In a cable television system, the cable company runs a wire down the street, and subscribers have taps running from it to their television sets.
Switched systems do not have a single backbone like cable television. Instead, there are individual wires from machine to machine, with many different wiring patterns in use. Messages move along the wires, and an explicit switching decision is made at each routing stage to forward the message along an outgoing wire. The worldwide public telephone system is organized in this way. The next dimension in our taxonomy is that in some systems the machines are tightly coupled, and in others they are loosely coupled.
In a tightly coupled system, the delay for message passing is short and the data rate, the number of bits per second that can be transferred, is high. In a loosely coupled system the opposite holds: the inter-machine message delay is large and the data rate is low. Tightly coupled systems tend to be used as parallel systems, working on a single problem, and loosely coupled ones tend to be used as distributed systems, working on many unrelated problems, but this is not always true. One famous example is a project in which hundreds of computers all over the world worked together to factor a huge number.
Each computer was assigned a different range of divisors to try, and they all worked on the problem in their spare time, reporting results back by email when they finished. On the whole, multiprocessors tend to be more tightly coupled than multicomputers, because they can exchange data at memory speeds, but some fiber-optic-based multicomputers can also work at memory speeds.
Distributed computing allows companies to build an affordable, high-performance infrastructure using inexpensive off-the-shelf computers with microprocessors instead of extremely expensive mainframes.
Large clusters can even outperform individual supercomputers and handle complex, computationally intensive high-performance computing tasks. Since distributed computing architectures are composed of multiple, sometimes redundant components, it is easier to compensate for the failure of individual components (i.e. increased fault tolerance). Thanks to the high level of task distribution, processes can be outsourced and the computing load can be shared (i.e. load balancing).
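As a minimal sketch of these two ideas, the snippet below spreads independent tasks across a worker pool and reassigns a task whose worker "crashes"; the worker function, crash set, and retry policy are all invented for illustration.

```python
# Minimal sketch of load sharing with failure compensation: independent
# tasks are spread across a worker pool, and a task whose worker "crashes"
# is simply handed to another worker. All names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

CRASH_ON_FIRST_TRY = {2, 5, 7}   # tasks whose first assignment fails
already_tried = set()

def flaky_worker(task):
    """Square the task's payload, crashing once for the marked tasks."""
    if task in CRASH_ON_FIRST_TRY and task not in already_tried:
        already_tried.add(task)
        raise RuntimeError(f"node handling task {task} went down")
    return task * task

def run_with_retry(task, attempts=3):
    for _ in range(attempts):
        try:
            return flaky_worker(task)
        except RuntimeError:
            continue              # reassign to another (simulated) node
    raise RuntimeError(f"task {task} failed everywhere")

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_with_retry, range(10)))

print(results)  # every task completes despite the simulated crashes
```

A real cluster scheduler makes the same move at a larger scale: detect the failed node, reschedule its work elsewhere, and merge the results as if nothing had happened.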
Many distributed computing solutions aim to increase flexibility, which usually also increases efficiency and cost-effectiveness. To solve specific problems, specialized platforms such as database servers can be integrated. For example, service-oriented architectures (SOA) can be used in business settings to create bespoke solutions for optimizing specific business processes. Providers can offer computing resources and infrastructure worldwide, which makes cloud-based work possible. This allows companies to respond to customer demand with scaled, needs-based offers and prices. Users and companies can also be flexible in their hardware purchases, since they are not restricted to a single manufacturer.
Another major advantage is scalability. If you choose to use your own hardware for scaling, you can steadily expand your device fleet in affordable increments. Despite its many advantages, distributed computing also has some disadvantages, such as the higher cost of implementing and maintaining a complex system architecture. In addition, timing and synchronization problems between distributed instances must be addressed. In terms of partition tolerance, the decentralized approach does have certain advantages over a single processing instance.
However, the distributed computing method also gives rise to security problems, such as data becoming vulnerable to sabotage and hacking when transferred over public networks.
Distributed infrastructures are also generally more error-prone, since there are more interfaces and potential sources of error at both the hardware and software levels. Distributed computing has become an essential basic technology in the digitalization of both our private lives and our work lives.
The internet and the services it offers would not be possible if it were not for the client-server architectures of distributed systems. Every Google search involves distributed computing with supplier instances around the world working together to generate matching search results. Google Maps and Google Earth also leverage distributed computing for their services.
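The client-server pattern behind such services can be shown in miniature: one process listens for requests and answers them, another sends a query and reads the reply. The request and reply formats below are invented, and a single local socket exchange stands in for a worldwide service.

```python
# Bare-bones client-server exchange over a local TCP socket: the server
# answers one request, the way a web service answers a browser's query.
import socket
import threading

def serve_once(sock):
    """Accept one connection, read the query, send back a canned reply."""
    conn, _ = sock.accept()
    with conn:
        query = conn.recv(1024).decode()
        conn.sendall(f"results for: {query}".encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))      # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"distributed computing")
reply = client.recv(1024).decode()
client.close()
print(reply)
```

A real search engine differs in scale, not in shape: the "server" is itself a distributed system of many cooperating instances, but the request-reply contract with the client is the same.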
Distributed computing methods and architectures are also used in email and conferencing systems, airline and hotel reservation systems as well as libraries and navigation systems. In the working world, the primary applications of this technology include automation processes as well as planning, production, and design systems.
Social networks, mobile systems, online banking, and online gaming also rely on efficient distributed systems. Additional areas of application for distributed computing include e-learning platforms, artificial intelligence, and e-commerce. Purchases and orders made in online shops are usually carried out by distributed systems. In meteorology, sensor and monitoring systems rely on the computing power of distributed systems to forecast natural disasters. Many digital applications today are based on distributed databases.
Particularly computationally intensive research projects that used to require expensive supercomputers can today be carried out on more cost-effective distributed systems. The volunteer computing project SETI@home has been setting standards in the field of distributed computing since its launch. Countless networked home computers belonging to private individuals have been used to evaluate data from the Arecibo Observatory radio telescope in Puerto Rico and to support the University of California, Berkeley in its search for extraterrestrial life.
A unique feature of this project was its resource-saving approach: the analysis software ran only while a participant's computer was otherwise idle. After a signal was analyzed, the results were sent back to the project headquarters in Berkeley.
Traditionally, cloud solutions are designed for central data processing: IoT devices generate data, send it to a central computing platform in the cloud, and await a response. With large-scale cloud architectures, however, such a system inevitably leads to bandwidth problems. For future projects such as connected cities and smart manufacturing, classic cloud computing is a hindrance to growth. Autonomous cars, intelligent factories, and self-regulating supply networks form a dream world for large-scale, data-driven projects that will make our lives easier.
However, the cloud model alone is not enough to make these dreams a reality. The challenge of effectively capturing, evaluating, and storing mass data requires new data-processing concepts; edge computing addresses this by processing data close to where it is generated. The practice of renting IT resources as cloud infrastructure instead of providing them in-house has been commonplace for some time now.
While most solutions like IaaS or PaaS require specific user interactions for administration and scaling, a serverless architecture allows users to focus on developing and implementing their own projects.
The CAP theorem states that distributed systems can only guarantee two out of the following three points at the same time: consistency, availability, and partition tolerance.
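The trade-off can be sketched with a toy replica that reacts to a network partition in one of two ways; the class, its modes, and its behavior are illustrative inventions, not a real database API.

```python
# Toy illustration of the CAP trade-off during a network partition.
class Replica:
    def __init__(self, mode):
        self.mode = mode          # "CP" favours consistency, "AP" availability
        self.value = "v1"
        self.partitioned = False  # True = cut off from the other replicas

    def write(self, value):
        if self.partitioned and self.mode == "CP":
            # A CP system refuses the request rather than risk divergence.
            raise TimeoutError("unavailable: cannot reach a quorum")
        # An AP system accepts the write and reconciles replicas later.
        self.value = value

    def read(self):
        if self.partitioned and self.mode == "CP":
            raise TimeoutError("unavailable: cannot reach a quorum")
        return self.value         # in AP mode this may be stale

cp, ap = Replica("CP"), Replica("AP")
cp.partitioned = ap.partitioned = True
ap.write("v2")                    # AP stays available during the partition...
print(ap.read())                  # ...but other replicas may still hold "v1"
try:
    cp.write("v2")
except TimeoutError as e:
    print(e)                      # CP gives up availability to stay consistent
```

Since real networks do partition, the practical reading of the theorem is that a designer chooses between consistency and availability for the duration of a partition, not among all three properties in general.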
In this article, we will explain where the CAP theorem originated and how it is defined.