Computer Science
Computing Science spans design, software construction, refactoring and re-engineering, quality control and maintenance, database structure and techniques, data representation and metadata, visual representation, user interfaces, human-interaction concerns, and metaprogramming, along with figuring out how to do it better next time and faster this time.
All structured programs are built from combinations of three constructs:
- sequence: simple statements executed one after another
- selection: choosing which statement to do next (if)
- repetition: loops (while)
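The three constructs above can be sketched in a few lines of Python; the routine and its data are illustrative, not from the original text.

```python
# A sketch of the three structured-programming building blocks,
# using a routine that sums the even numbers in a list.

def sum_of_evens(numbers):
    total = 0                  # sequence: a simple statement
    for n in numbers:          # repetition: loop over each element
        if n % 2 == 0:         # selection: choose whether to act
            total = total + n  # sequence again
    return total

print(sum_of_evens([1, 2, 3, 4, 5, 6]))  # prints 12
```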
Programming is one of the best ways to make sure you understand something. Pseudocode is the main thing my new students have a hard time with: they're not used to writing instructions with such precision. Once they get the hang of it, however, they start seeing the patterns everywhere.
Computer Science is a very broad field, but there has been a clear shift lately toward the study of parallel processes. A great example of this is the pi calculus, which attempts to do for distributed programs what the lambda calculus did for sequential computation: formalize it and study it in an abstract manner. The pi calculus, while a bit tough, might be worth at least bringing up briefly when mentioning mathematical methods.
You might also consider amorphous computing (which is along similar lines), as it intersects biological fields quite nicely.
Compression, encryption, and data/knowledge representation are also very interesting issues to bring up... From there things such as automated reasoning and data mining can be discussed, even if only in passing.
First and foremost, however, I tend to believe that C.S. is mostly about how to characterize and solve problems... Things like inductive reasoning, dynamic programming, and novel data structures are interesting and powerful conceptual tools which I believe can help shape an individual's view of the universe and the processes of which it consists.
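Dynamic programming, mentioned above as one of those conceptual tools, can be shown with a minimal sketch; the Fibonacci example is my own illustration, not one given in the text.

```python
# A minimal dynamic-programming sketch: computing Fibonacci numbers
# bottom-up, reusing each subproblem's answer instead of recomputing
# it, which turns an exponential recursion into a linear loop.

def fib(n):
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):      # build up from the two base cases
        prev, curr = curr, prev + curr
    return curr

print(fib(10))  # prints 55
```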
Most of the confusion about what Computer Science is comes from too little differentiation between the science, the engineering, and the practice of programming. The science looks at the behavior of programs and tries to form theories about it using mathematics. Engineering looks at best practices and tries to provide tools for programmers to use; hence patterns and design methodologies. The practice of programming teaches a programming language and/or an operating system, and also how to design and use databases. All three teach a certain amount of theory, but their end goals are very different.
Multimedia has taken media to greater heights. With today's advanced technology, the creation of graphics content has become far more sophisticated, and 3D graphics are a vital part of everything from animated videos to games. If you have ever been amazed at how they are created and want to try creating your own, here is a product that can help you do just that.
Blender is an open-source 3D graphics application; once a commercial product, it has since been released as open-source software.
It can be used for modeling, UV unwrapping, texturing, rigging, skinning, animating, rendering, water, particle, and other simulations, non-linear editing, compositing, and creating interactive 3D applications, including games.
Blender's feature set is comparable to that of the popular commercial, proprietary packages; much of what can be done with commercial software can be done freely with Blender. Its wide range of applications includes modeling, animation, and texturing, and it can be used to create interactive 3D applications, realistic 3D animated movies, and high-end games.
To learn more about Blender's capabilities, visit the Blender features page.
Take a look at the mind-blowing creations that have bloomed out of this wonderful software in the art gallery.
To get Blender, visit the download page.
Blender homepage: www.blender.org
Global Knowledge:
It is not possible to implement centralized control in a distributed system, because of the problems imposed by the architecture of the system itself, so decentralized control is used: all control functions are distributed among the various sites of the system.
But in a distributed system, due to the absence of global shared memory, it is not possible to have up-to-date knowledge about processes and resources, and without an up-to-date global state it is difficult to implement even decentralized control.
Another important function of an operating system is scheduling, which is mainly done on the basis of the arrival times of processes. In a distributed system, the absence of a global physical clock makes scheduling difficult to implement as well.
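The absence of a global physical clock is usually worked around with logical clocks; the classic Lamport scheme (not named in the text above, but standard in this setting) can be sketched as follows.

```python
# A sketch of a Lamport logical clock: each site keeps a counter,
# increments it on every local event, and on receiving a message
# adopts max(local, received) + 1, so that causally related events
# across sites stay consistently ordered without a physical clock.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time          # timestamp carried on the message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()           # a's clock becomes 1
b.local_event()        # b's clock becomes 1
print(b.receive(t))    # prints 2: max(1, 1) + 1
```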
Naming:
A name refers to an object such as a computer, a printer, a file, or a service. One such service is the naming service itself, which is implemented using a look-up procedure: tables or directories map logical names to objects. Such directories can be implemented in two ways.
Approach 1
The directory is replicated at each site, in order to avoid a single point of failure and to increase the availability of the naming service.
Disadvantages: more space is required, and there is a consistency problem: any modification to the directory at one site must also be reflected in the copies at all other sites.
Approach 2
The directory is partitioned into blocks, and these blocks are distributed among the various sites. The problem is that when a name is referenced, the location of the corresponding block must somehow be found.
Solution: another directory is created containing the locations of these blocks, and this much smaller location directory is replicated at the various sites.
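Approach 2 can be sketched in a few lines; the site names, block count, and hashing scheme below are illustrative assumptions, not details from the text.

```python
# A sketch of Approach 2: the name space is partitioned into blocks
# (here by hashing the name), and a small replicated location
# directory maps each block number to the site that stores it.

NUM_BLOCKS = 4

# location directory: block number -> site holding that block
location_dir = {0: "siteA", 1: "siteB", 2: "siteA", 3: "siteC"}

# per-site partitions of the name -> object table
partitions = {"siteA": {}, "siteB": {}, "siteC": {}}

def block_of(name):
    return hash(name) % NUM_BLOCKS

def register(name, obj):
    site = location_dir[block_of(name)]
    partitions[site][name] = obj

def lookup(name):
    site = location_dir[block_of(name)]   # one hop via location directory
    return partitions[site].get(name)

register("printer-3", "tcp://10.0.0.7:9100")
print(lookup("printer-3"))  # prints tcp://10.0.0.7:9100
```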
Scalability:
Scalability refers to the requirement that growth of the system should not result in unavailability or in performance degradation. Example: broadcast-based protocols work well for small systems but not for large ones. Suppose a distributed system locates files by broadcasting queries: every computer has to handle the message overhead of every query, and as the system grows there are more such queries, producing more message overhead and degrading the performance of the system.
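The broadcast example above can be made concrete with back-of-the-envelope arithmetic; the query rate is an illustrative assumption.

```python
# A sketch of why broadcast-based lookup does not scale: each query
# costs one message per machine, and if every machine issues q queries,
# total traffic grows quadratically with the number of machines n.

def broadcast_messages(n, queries_per_machine=1):
    # n machines each broadcast their queries to all n machines
    return n * queries_per_machine * n

for n in (10, 100, 1000):
    print(n, broadcast_messages(n))
# message count grows as n**2: 100, then 10000, then 1000000
```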
Compatibility:
Compatibility refers to interoperability among resources: a resource can be used from any computer, or in combination with other resources. There are three levels of compatibility.
Binary-Level Compatibility: a system is binary-level compatible if all machines execute the same instruction set, that is, all of them have the same architecture. Most operating-system functionality depends on the underlying machine architecture, and since the architecture is the same on all machines, the operating-system code is easy to implement.
Execution-Level Compatibility: a system has execution-level compatibility if the same source code can be compiled and executed on any machine in the system.
Protocol-Level Compatibility: a system has protocol-level compatibility if all machines share the same set of communication rules. It is protocol-level compatibility that makes it possible for machines running different operating systems to work together.
Process Synchronization:
Processes running on different machines must access shared resources in a mutually exclusive manner. In such a system a process can request or release resources at any time and in any order, which may lead to deadlock. Such deadlocks must be detected as early as possible, otherwise system performance will degrade.
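Deadlock detection is commonly done by looking for cycles in a wait-for graph; the following sketch and its process names are illustrative, not an algorithm specified in the text.

```python
# A sketch of deadlock detection via a wait-for graph: an edge
# p -> q means process p is waiting for a resource held by q,
# and any cycle in the graph means the processes are deadlocked.

def has_deadlock(wait_for):
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)
        for q in wait_for.get(p, ()):
            if q in on_stack:            # back edge found: a cycle
                return True
            if q not in visited and dfs(q):
                return True
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits for P2, P2 waits for P3, P3 waits for P1: deadlock
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"]}))                # False
```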
Resource Management:
In such a system, a user must be able to access remote resources with as much ease as local resources. Resources can be made available in three ways:
Data Migration: data is transferred from its source to the location of the computation, and any modification made to the data is also reflected back at the source.
Computation Migration: the computation itself is migrated to the location where the required data is present.
Process Migration: the entire process is migrated to the location where the data is present. In a distributed system, processes are usually executed at the sites where they originate; migrating them can also support load sharing.
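The choice between the migration approaches above often comes down to what is cheaper to move over the network; the sizes and names in this sketch are illustrative assumptions.

```python
# A sketch of choosing between data migration and computation
# migration: ship whichever is smaller, the data or the code
# (sizes in bytes, purely illustrative).

def cheaper_strategy(data_size, code_size):
    if code_size < data_size:
        return "computation migration"   # move the program to the data
    return "data migration"              # move the data to the program

print(cheaper_strategy(data_size=500_000_000, code_size=20_000))
# prints computation migration
print(cheaper_strategy(data_size=4_000, code_size=20_000))
# prints data migration
```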
Security
The two main aspects of security are authentication and authorization. Authentication involves verifying the identity of a user. Authorization involves assigning access rights to users for resources, and checking those rights whenever a user accesses a particular resource.
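The two steps can be separated cleanly in code; the users, passwords, and rights tables below are toy assumptions for illustration (a real system would never store plaintext passwords).

```python
# A sketch separating the two security steps: authentication verifies
# who the user is (here, a toy plaintext password check), while
# authorization checks whether that user holds the right to use a
# given resource. All names and tables are illustrative.

passwords = {"alice": "s3cret", "bob": "hunter2"}
access_rights = {"alice": {"printer", "file-server"}, "bob": {"printer"}}

def authenticate(user, password):
    return passwords.get(user) == password

def authorize(user, resource):
    return resource in access_rights.get(user, set())

if authenticate("bob", "hunter2"):
    print(authorize("bob", "printer"))      # True: bob may print
    print(authorize("bob", "file-server"))  # False: no right granted
```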
Structuring
The structuring of distributed operating system can be performed in three ways:
Monolithic Kernel: the distributed operating system is designed as a monolithic kernel containing all the services the operating system provides, and a copy of this huge kernel runs on every machine in the system. The limitation of this approach is that most machines do not need most of the services, yet the kernel provides them anyway.
Collective Kernel Approach: the operating system is designed as a collection of independent processes, each representing a service such as distributed scheduling or a distributed file system. The kernel consists of a nucleus of the operating system, called a microkernel, which is installed on all machines and provides basic functionality, including interaction between services running on different machines. Examples: Galaxy, V-Kernel.
Object-Oriented Kernel: all operating-system services are implemented as objects, each of which encapsulates a data structure and a set of operations on that data structure. Examples: Amoeba, CLOUDS.