Controlling the engine is the most processor-intensive job on your car, and the engine control unit (ECU) is the most powerful computer in most cars. The ECU uses closed-loop control, a control scheme that monitors the outputs of a system in order to control its inputs, managing the emissions and fuel economy of the engine (as well as a host of other parameters). Gathering data from dozens of different sensors, the ECU knows everything from the coolant temperature to the amount of oxygen in the exhaust. With this data, it performs millions of calculations each second, including looking up values in tables, calculating the results of long equations to decide the best spark timing, and determining how long the fuel injector is open. The ECU does all of this to ensure the lowest emissions and the best mileage.
A modern ECU might contain a 32-bit, 40 MHz processor. This may not sound impressive compared to the 500 to 1,000 MHz processor you probably have in your PC, but remember that the processor in your car is running much more efficient code than the one in your PC. The code in an average ECU takes up less than 1 megabyte (MB) of memory. By comparison, you probably have at least 2 gigabytes (GB) of programs on your computer; that is 2,000 times the amount in an ECU.
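As a rough illustration of the closed-loop idea, here is a tiny, hypothetical Java sketch: read a sensor output, compare it to a target, and nudge an input (here, the injector pulse width) toward the target. The sensor model, the gain and the numbers are made up for the example; real ECU control laws and lookup tables are far more involved.

public class ClosedLoopSketch {

    static double pulseWidthMs = 2.5;  // current fuel injector open time (made-up starting point)

    public static void main(String[] args) {
        double targetLambda = 1.0;     // stoichiometric air-fuel target
        for (int cycle = 0; cycle < 10; cycle++) {
            double measuredLambda = readOxygenSensor();
            // If the mixture reads lean (lambda above target), lengthen the pulse to add fuel.
            pulseWidthMs += 0.5 * (measuredLambda - targetLambda);
            System.out.printf("cycle %d: lambda=%.3f, pulse=%.3f ms%n",
                    cycle, measuredLambda, pulseWidthMs);
        }
    }

    // Stand-in for a real exhaust oxygen sensor: less fuel injected reads leaner.
    static double readOxygenSensor() {
        return 1.0 + (3.0 - pulseWidthMs) * 0.1;
    }
}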
Basic Parsing Techniques
Here we discuss the two most common forms of parser: operator precedence and recursive descent. Operator precedence is especially suitable for parsing expressions, since it can use information about the precedence and associativity of operators to guide the parse. A recursive descent parser uses a collection of mutually recursive routines to perform the syntax analysis. The great bulk of compilers in existence in the early 1970s used one or both of these methods. A common situation is for operator precedence to be used for expressions and recursive descent for the rest of the language.
The primary advantage of these methods is that they are easy to implement by hand. But there are drawbacks as well. Operator precedence has the curious property that if one is not careful, one can recognize inputs that are not in the language of the underlying grammar. Likewise, recursive descent, particularly when augmented with backtracking, can produce rather unexpected results.
Fortunately, there are two newer methods gaining popularity that are both more general than the older methods and more firmly grounded in grammar theory. The first of these methods, LL parsing, will be mentioned in this article, as it is really a table-based variant of recursive descent.
Parser
A parser for a grammar G is a program that takes as input a string w and produces as output either a parse tree for w, if w is a sentence of G, or an error message indicating that w is not a sentence of G. Often the parse tree is produced in only a figurative sense; in reality, the parse tree exists only as a sequence of actions made by stepping through the tree construction process.
This article discusses the operation of two basic types of parser for context-free grammars: bottom-up and top-down. As indicated by their names, bottom-up parsers build the parse tree from the leaves up to the root, while top-down parsers start with the root and work down to the leaves. In both cases the input to the parser is scanned from left to right, one symbol at a time.
The bottom-up parsing method we discuss is called "shift-reduce" parsing because it consists of shifting input symbols onto a stack until the right side of a production appears on the top of the stack. The right side may then be replaced by (reduced to) the symbol on the left side of the production, and the process repeated.
Unfortunately, if A->XYZ is a production, it is not correct to reduce XYZ to A every time XYZ appears on the top of the stack; there may be occasions where it is necessary to continue shifting input symbols on top of XYZ. Designing an algorithm from a grammar so that shift-reduce decisions are made properly is the fundamental problem of bottom-up parser construction.
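To make the shift-reduce mechanics concrete, here is a minimal, hypothetical Java sketch for the toy grammar E -> E + id | id. The class name, token representation and the hard-coded reduction rules are illustrative assumptions; a real shift-reduce parser would drive these decisions from a parse table rather than from ad hoc checks.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class ShiftReduceDemo {

    // Returns true if the token sequence is a sentence of E -> E + id | id.
    public static boolean recognize(List<String> tokens) {
        Deque<String> stack = new ArrayDeque<>();
        int pos = 0;
        while (true) {
            if (topMatches(stack, "id", "+", "E")) {
                // Reduce: E + id  ->  E
                stack.pop(); stack.pop(); stack.pop();
                stack.push("E");
            } else if (stack.size() == 1 && "id".equals(stack.peek())) {
                // Reduce: id  ->  E  (only for the very first id)
                stack.pop();
                stack.push("E");
            } else if (pos < tokens.size()) {
                // Shift the next input symbol onto the stack.
                stack.push(tokens.get(pos++));
            } else {
                break;  // no applicable action and no input left
            }
        }
        // Accept if all input was consumed and only the start symbol remains.
        return pos == tokens.size() && stack.size() == 1 && "E".equals(stack.peek());
    }

    // Checks whether the top of the stack, from the top down, matches the given symbols.
    private static boolean topMatches(Deque<String> stack, String... symbols) {
        if (stack.size() < symbols.length) return false;
        java.util.Iterator<String> it = stack.iterator();
        for (String s : symbols) {
            if (!s.equals(it.next())) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(recognize(List.of("id", "+", "id", "+", "id")));  // true
        System.out.println(recognize(List.of("id", "+", "+", "id")));        // false
    }
}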
SLR parsing
A problem with LL(1) parsing is that most grammars need extensive rewriting to get them into a form that allows a unique choice of production. Even though this rewriting can, to a large extent, be automated, there are still a large number of grammars that cannot be automatically transformed into LL(1) grammars.
LR parsing is a class of bottom-up methods that accepts a much larger class of grammars than LL(1) parsing, though still not all grammars. The main advantage of LR parsing is that less rewriting is required to get a grammar into acceptable form for LR parsing than is the case for LL(1) parsing.
Furthermore, LR parsers allow external declaration of operator precedences for resolving ambiguity, instead of requiring the grammars themselves to be unambiguous.
We limit the discussion to SLR for the following reasons:
• It is simpler.
• In practice, LALR(1) handles only a few more grammars than SLR.
• When a grammar is in the SLR class, the parse-table produced by an SLR parser generator will be identical to the table produced by an LALR(1) parser generator.
• Understanding of SLR principles is sufficient to know how to rewrite a grammar rejected by an LALR(1) parser generator.
If the input text does not conform to the grammar, there will at some point during the parsing be no applicable actions and the parser will stop with an error message. Otherwise, the parser will read through all the input and leave a single element (the start symbol of the grammar) on the stack. LR parsers are also called shift-reduce parsers. As with LL(1), our aim is to make the choice of action depend only on the next input symbol and the symbol on top of the stack.
Conflicts in SLR parse-tables
When reduce actions are added to SLR parse-tables, we might add one to a place where there is already a shift action, or we may add reduce actions for several different productions to the same place. When either of these happens, we no longer have a unique choice of action, i.e., we have a conflict. The first
situation is called a shift-reduce conflict and the other case a reduce-reduce conflict. Both may occur in the same place. Conflicts are often caused by ambiguous grammars, but (as is the case for LL-parsers) even some non-ambiguous grammars may generate conflicts. If a conflict is caused by an ambiguous grammar, it is usually (but not always) possible to find an equivalent unambiguous grammar. But even unambiguous grammars may in some cases generate conflicts in SLR-tables.
In some cases, it is still possible to rewrite the grammar to get around the problem, but in a few cases the language simply is not SLR. Rewriting an unambiguous grammar to eliminate conflicts is somewhat of an art. Investigation of the NFA states that form the problematic DFA state will often help identify the exact nature of the problem, which is the first step towards solving it. Sometimes, changing a production from left-recursive to right-recursive may help, even though left-recursion in general is not a problem for SLR-parsers, as it is for LL(1)-parsers.
For reference, the construction of an SLR parse-table proceeds as follows:
- Add the production S0 -> S, where S is the start symbol of the grammar.
- Make an NFA for the right-hand side of each production.
- If an NFA state s has an outgoing transition on a nonterminal N, add epsilon-transitions from s to the starting states of the NFAs for the right-hand sides of the productions for N.
- Convert the combined NFA to a DFA. Use the starting state of the NFA for the production added in step 1 as the starting state for the combined NFA.
- Build a table cross-indexed by the DFA states and grammar symbols (terminals including $ and nonterminals). Add shift actions for transitions on terminals and go actions for transitions on nonterminals.
- Calculate FOLLOW for each nonterminal (a minimal sketch is given after this list). For this purpose, we add one more start production.
- When a DFA state contains an accepting NFA state marked with production number p, where the nonterminal for p is N, find the symbols in FOLLOW(N) and add a reduce p action in the DFA state at all these symbols. If production p is the production added in step 1, add an accept action instead of a reduce p action.
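To illustrate the FOLLOW computation referred to in the construction steps above, here is a minimal, hypothetical Java sketch. The grammar representation (a map from nonterminals to lists of right-hand sides) and the assumption that there are no epsilon (empty) productions are simplifications made for the example; a full implementation also needs nullable and FIRST handling for empty right-hand sides.

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FollowSets {

    // Computes FOLLOW sets by fixed-point iteration. Any symbol that is not a
    // key of the grammar map is treated as a terminal. Assumes no epsilon productions.
    public static Map<String, Set<String>> follow(Map<String, List<List<String>>> grammar,
                                                  String start) {
        Map<String, Set<String>> first = new HashMap<>();
        Map<String, Set<String>> follow = new HashMap<>();
        for (String nt : grammar.keySet()) {
            first.put(nt, new HashSet<>());
            follow.put(nt, new HashSet<>());
        }
        follow.get(start).add("$");  // the end-of-input marker follows the start symbol

        boolean changed = true;
        while (changed) {
            changed = false;
            for (Map.Entry<String, List<List<String>>> e : grammar.entrySet()) {
                String lhs = e.getKey();
                for (List<String> rhs : e.getValue()) {
                    // FIRST(lhs) grows from the first symbol of each production.
                    changed |= first.get(lhs).addAll(firstOf(rhs.get(0), grammar, first));

                    for (int i = 0; i < rhs.size(); i++) {
                        String sym = rhs.get(i);
                        if (!grammar.containsKey(sym)) continue;  // skip terminals
                        if (i + 1 < rhs.size()) {
                            // A nonterminal is followed by FIRST of the next symbol.
                            changed |= follow.get(sym)
                                             .addAll(firstOf(rhs.get(i + 1), grammar, first));
                        } else {
                            // A trailing nonterminal inherits FOLLOW of the left-hand side.
                            changed |= follow.get(sym).addAll(follow.get(lhs));
                        }
                    }
                }
            }
        }
        return follow;
    }

    private static Set<String> firstOf(String symbol,
                                       Map<String, List<List<String>>> grammar,
                                       Map<String, Set<String>> first) {
        return grammar.containsKey(symbol) ? first.get(symbol) : Set.of(symbol);
    }

    public static void main(String[] args) {
        // Toy grammar:  E -> E + T | T   and   T -> ( E ) | id
        Map<String, List<List<String>>> g = Map.of(
                "E", List.of(List.of("E", "+", "T"), List.of("T")),
                "T", List.of(List.of("(", "E", ")"), List.of("id")));
        System.out.println(follow(g, "E"));  // FOLLOW(E) and FOLLOW(T) both contain $, + and )
    }
}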
Office automation systems are computer-based information systems that collect, process, store and transmit electronic messages, documents and other forms of communication among individuals, work groups and organizations. Such systems can increase the productivity of managerial end users and other professional and staff personnel by significantly reducing the time and effort needed to produce, access and receive business communications.
An overview of office automation systems is given below:
1) Office publishing systems are word processing and desktop publishing systems.
2) Image processing systems are image processing, optical scanning and storage, document management and interactive video systems.
3) Electronic communication systems are electronic meeting, teleconferencing and telecommuting systems.
4) Office support systems are electronic calendar, ticket file, notebook, directory system, work scheduling and task management systems. The above are described in brief:
Office publishing systems
Word processing – It is the use of computer systems to automate the transformation of ideas into a readable form of communication. It involves manipulating text data to produce information products in the form of documents (letters, memos, forms and reports).
Desktop publishing – Organisations can use and develop publishing systems to produce their own printed materials. They can design and print their own newsletters, manuals, brochures and books with several types of styles and graphics and colours on each page.
Voice store and forward systems – These are also known as voice mail or structured verbal-response exchanges. They have the capability of storing verbal data and/or communicating it to another location. This is one of the latest systems and has been found useful by corporate houses and society at large.
Electronic meeting systems – With electronic meeting systems, people need not spend money on travel to attend meetings away from their normal work location. These systems involve the use of video and audio communications to allow conferences and meetings to be held with participants who may be scattered across a room, a building or a country. They save employees' time, increase productivity, and reduce total expenses and energy consumption. Electronic meetings are also promoted by group decision support systems, because they enable more efficient and effective decision-making by groups of people.
Teleconferencing – Participants at remote sites key in their presentations and responses whenever convenient from their online terminals or workstations, which are connected to a central conference computer. Since all participants do not have to do this at the same time, this form of EMS is called computer conferencing. Sessions may also be held in real time, with the main participants at remote sites taking part through voice input of questions and responses.
Telecommuting – It is the use of telecommunications by workers to replace commuting, enabling them to work from their homes. The term is also used to describe the use of telecommunications to carry on work activities from temporary locations other than offices and homes. Some people consider telecommuting the creation of virtual offices. Telecommuters use electronic mail and voice mail to communicate with each other about job assignments.
Multimedia – Another evolving set of technologies for systematically managing graphics, drawings and photographs in addition to text and data is multimedia data management systems. Some of these are called hypermedia systems. These systems use computers and information technology to incorporate text, data, graphics, video and audio within a single application.
Multimedia pertains to technologies that facilitate the integration of two or more types of media, such as text, graphics, sound, voice, full-motion video or animation, into a computer-based application. Since the 1990s, multimedia has been the foundation of new consumer products and services such as electronic books and newspapers, electronic classroom presentation technologies, full-motion video conferencing, imaging, graphics design tools, and video, electronic and voice mail.
Once a multimedia application is developed, users can press a key on the keyboard or a button on a mouse to retrieve loads of text information. Video clips can be run with the press of another button. Other buttons can activate related narration or music via an attached sound system. In multimedia databases, users can obtain data and information about an entry in a variety of formats, including text, data, still photos, and audio and video clips. Multimedia databases and applications are rapidly growing in popularity. Multimedia is providing powerful tools for sales presentations. Honeywell Inc. in Minneapolis uses video and audio on laptop computers to describe its products and services. By providing all its 600 sales representatives with the same multimedia presentation on laptops, the company found it could promote its products in a more consistent manner.
Organisational applications for multimedia are many, because multimedia is likely to be a major instrument for obtaining corporate information. If multimedia and telecommunication technologies are combined, desktop-to-desktop video conferencing and file sharing become feasible, enabling people across an organization to meet and interact. All future computers, regardless of size, are expected to have built-in multimedia capabilities, combining existing text and numbers with music, full-motion and still-frame video, animation, voice messages, telephone and fax capabilities. The future will see steady and impressive progress towards faster chips at lower cost and microprocessors with the power of today's mainframes or supercomputers.
Image processing systems
These represent another approach: companies increasingly work with photographs, graphs and document images that cannot be handled easily by a traditional database. With multimedia systems, the user can be provided with information in a variety of formats, whereas image-based databases are sufficient for users in many applications. Advances in optical storage and document scanning technologies have made these attractive options for many organizations. It is possible to create and store documents that contain an image or photograph of a source document. This can include the picture of an employee in an employee file, or can assist inventory clerks in identifying stock items.
Image processing systems, also known as electronic image management systems, allow users to electronically capture, store, process and retrieve images of documents. Image database management systems are becoming more versatile. LAN-based image processing systems are also becoming common; these include several servers, each dedicated to a specific function.
Document management systems
These systems are sophisticated image processing systems. These often interface with other office systems such as word processing, desktop publishing, electronic mail, voice mail etc.
These systems allow digitized ‘voice notes’ to be attached to specific document images and may have other special features as well as the ability to store multi-colour documents and colour photographs.
The Automated Office
The goal is to make the office more efficient and effective. In implementing office automation, one has to eliminate time-wasting activities and avoid duplication of effort. To achieve this goal, data must never be entered more than once, and it must be possible to communicate the data to the appropriate persons or locations. The modern office may be automated through the use of data processing, data communications, word processing, electronic mail, voice store and forward systems, etc.
Office automation may be defined as a separate sub-system or included within information processing. It includes a wide range of support facilities for knowledge work and activities. When first introduced, the term typically referred only to the replacement of typewriters by word processors. Now office automation is highly inter-related with data processing and telecommunications. Therefore, there is a basis in organizational theory for assigning all three functions to a single organizational unit, namely information systems or information resources. LANs and WANs are key components for integrating office automation functions and providing access to data processing facilities.
As more white-collar jobs require access to computers and communication technology, other physical materials become less important. Electronic and voice mail extend the capability of the telephone to contact another person without being physically present. Information technology removes the physical constraints of being 'in the office 9 to 5'. This location and time independence of office jobs permits a greater number of options for physical location and organizational facilities.
Three possibilities are in various stages of discussion and experimentation: satellite work centers, neighborhood work centers and work at home. In the first, employees may go to a satellite center, which is equipped with terminals, printers and teleconferencing facilities, rather than commuting to the central office. A neighbourhood work centre is a facility shared by the employees of many organizations. The third option, working at home, has not become significant because managers feel uncomfortable supervising employees they cannot see, and because many employees prefer the social interaction the office provides to the isolation of working at home.
A data warehouse turns raw information into a useful analytical tool for business decision making. The fundamental question that gets addressed in about 1,000 firms is: "Which customers are buying or using which products and services, when and where?" If you know the answer to that question, you are able to guide your business strategically.
Transaction processing systems can also play a strategic role in gaining competitive advantages for a business. Many firms are using the Internet, extranets and other networks that tie them electronically to their customers or suppliers for real-time or online transaction processing (OLTP). OLTP is a real-time form of transaction processing.
Companies often keep raw information in online transaction processing systems, which track day-to-day operations such as each sale, purchase and inventory change. But OLTP systems are not well suited to answering questions that affect the past, present and future direction of a business, questions like: What are the historical trends in unit costs versus growth in sales to customers in Orissa?
To answer those kinds of questions, a company needs an analysis system with the ability to perform ad hoc queries and create specialized reports. The raw material for analysis is a combined view of all the relevant data a company has: a data warehouse. The warehouse stores information from OLTP systems and other sources of raw data, such as external systems.
Metadata blueprints: Because information is coming from many sources, each with its own view of the data, a company must create an enterprise level data model to have a consistent view of its information. This metadata is the blueprint for the pieces of the data warehouse architecture.
The process of transforming raw data into a data warehouse involves several steps: extraction, consolidation, filtering, cleansing, conversion and aggregation. This process is collectively known as data warehouse generation. Generation is at the heart of the warehouse infrastructure, and most of the effort in a data warehouse project is spent on this process. Doing it right means the difference between finding answers that are valuable and answers that are useless. Here is what each of the steps involves:
Extraction – This step involves taking the data out of its original database and transferring it to the data warehouse infrastructure. Companies often place restrictions on what is extracted. For example, the extraction process may occur every day, so any changes to the raw data sources older than 24 hours are ignored.
Consolidation – This is the process of combining data from several sources into one database. To get a complete view of a customer, a company may consolidate data from order entry systems, sales contact databases and technical support databases.
Filtering - Not every piece of data is needed. For example, a company may want to know which products customers have ordered but do not need the confirmation number used to process a sale. Filtering picks out the relevant data and removes duplicate entries.
Cleansing - The quality of an answer is only as good as the quality of the data used to derive that answer, so it is important to cleanse the data to improve the accuracy of the data in the warehouse. The classic example of poor-quality data is a single customer with multiple entries. It requires some intelligence on the part of the cleansing software to identify and correct such data.
Conversion – Conversion, also called translation, means mapping raw data onto new data fields within the warehouse data model and translating the data into the format used by the warehouse. For example, the original supplier data may count widgets by the gross, while manufacturing data tracks widgets individually. In a data warehouse, the units of measure must be the same in order to get useful analysis results.
Aggregation – Often the value of a data warehouse is in the summarized data and derived data it contains, as opposed to the raw data stored in OLTP sources. The aggregation step sorts and combines data into useful metrics for analysis. For example, while the raw data may track individual orders by individual customers, a more useful measure of sales might be orders of a particular product family. The aggregation process generates these new calculated sales numbers.
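As a concrete illustration of the filtering, conversion and aggregation steps, here is a small, hypothetical Java sketch. The Order record, its field names and the gross-to-unit conversion factor are illustrative assumptions made for the example, not part of any particular warehouse schema.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WarehouseGenerationDemo {

    record Order(String orderId, String productFamily, int quantity, String unit) {}

    public static void main(String[] args) {
        List<Order> raw = List.of(
                new Order("A1", "widgets", 2, "gross"),   // supplier counts by the gross
                new Order("A1", "widgets", 2, "gross"),   // duplicate entry
                new Order("B7", "widgets", 50, "each"),
                new Order("C3", "gadgets", 10, "each"));

        // Filtering: drop duplicate entries, keeping only the relevant records.
        List<Order> filtered = raw.stream().distinct().toList();

        // Conversion: map everything onto a single unit of measure (1 gross = 144).
        List<Order> converted = filtered.stream()
                .map(o -> o.unit().equals("gross")
                        ? new Order(o.orderId(), o.productFamily(), o.quantity() * 144, "each")
                        : o)
                .toList();

        // Aggregation: summarize unit sales per product family.
        Map<String, Integer> byFamily = converted.stream()
                .collect(Collectors.groupingBy(Order::productFamily,
                        Collectors.summingInt(Order::quantity)));

        System.out.println(byFamily);  // e.g. {widgets=338, gadgets=10}
    }
}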
A data warehouse is a large centralized storage area, such as a database. When an organization centralizes the storage of data, it is said to be putting the data into a data warehouse. These warehouses contain millions of pieces of information about customer behaviour and demographics, and they are starting to contain information about other personal traits and behaviours. Even though the word 'warehouse' implies that the data is shelved, in fact the storage spaces are often nothing more than servers that contain large arrays of disks for storing information.
The scalable data warehouse framework is a complete view of data warehousing. To grasp data warehousing fully, it is important to understand the data warehousing process. Data warehouses, in essence, store and access data supplied by operational support systems (OSSs) and provide data for tools and applications.
The source data that will populate the data warehouse solution will come from OSSs. There are two major categories of operational support systems: network systems, including planning and engineering, provisioning, network management and trouble/repair; and business operations systems, including customer care, billing and directories.
A data warehouse will deliver network planning and analytical processes to support network asset optimization, provide cross-geographical views of critical network data, and deliver the integration of both customer and network data to improve and enhance profitability. This solution helps communications service providers manage the evolution of their networks so that they are robust enough to support new, diverse services in a cost-effective manner while addressing capacity and planning issues for basic network services.
As one might expect, the heart of a data warehouse is the data itself. The type and quantity of the data stored change almost on a daily basis. Change is inherent to the data warehouse. The data changes as the business grows, as operational data changes, as the business questions change, as the number of users changes and as the applications used to access the data change.
If a warehouse is built around a database engine that cannot handle its dynamic nature, failure is inevitable. This failure could force users to limit the questions they ask, result in long delays for the return of queried data, and ultimately require the construction of separate, independent systems. Finally, a failure could necessitate the use of summarized data instead of detailed data, and providing detailed data is quite possibly the most important role of the data warehouse. The capture of detailed data enables businesses to drill down, perform analysis on subject-oriented data, and gain insight into the entire business. Most importantly, businesses can learn from their mistakes and successes.
Data Marts and Operational Data Stores
A data mart is a specialized set of business information focusing on a particular aspect of the enterprise, such as a department or business process. The information in a data mart often comes from several different raw data systems. Many companies choose to feed a data mart from a data warehouse, because the data in the warehouse has already been consolidated and processed from the same raw data.
An operational data store is a hybrid of an OLTP system and an analytical system. It contains information that is frequently updated on an ad hoc basis, often in response to changes in the OLTP system, as opposed to the scheduled updates of a data warehouse. The data within an operational data store mirrors some of the data within the OLTP system; it has been extracted from the OLTP system and transformed and aggregated to a limited extent. Its purpose is to provide an operational-level query system that will not affect the performance of the raw OLTP systems. Databases used for transaction processing are designed to update thousands of records per second but are not designed for sophisticated querying. Data warehouse databases are designed to analyse terabytes of data and billions of records; they are organized to better allow analysis using special techniques.
Online transaction processing involves real-time transactions. It has been recognized that this data, combined with current data, contains an enormous amount of information from which one can discover trends that would never be seen on a day-to-day or month-to-month basis. To make this data more useful, it is now being stored in a separate database called a data warehouse.
The Document Object Model (DOM)
The Document Object Model, unlike SAX, has its origins in the World Wide Web Consortium (W3C). Whereas SAX is public-domain software, developed through long discussions on the XML-dev mailing list, DOM is a standard just as the actual XML specification itself is. The DOM is also not designed specifically for Java, but to represent the content and model of documents across all programming languages and tools. Bindings exist for JavaScript, Java, CORBA, and other languages, allowing the DOM to be a cross-platform and cross-language specification.
In addition to being different from SAX in regard to standardization and language bindings, the DOM is organized into "levels" instead of versions. DOM Level One is an accepted Recommendation; Level One details the functionality and navigation of content within a document. A document in the DOM is not just limited to XML, but can be HTML or other content models as well! Level Two, which should be finalized in mid-2000, builds upon Level One by supplying modules and options aimed at specific content models, such as XML, HTML, and Cascading Style Sheets (CSS). These less-generic modules begin to "fill in the blanks" left by the more general tools provided in DOM Level One.
The DOM and Java
Using the DOM from a specific programming language requires a set of interfaces and classes that define and implement the DOM itself. Because the methods involved are not outlined specifically in the DOM specification, which instead focuses on the model of a document, language bindings must be developed to represent the conceptual structure of the DOM for its use in Java or any other language. These language bindings then serve as APIs for us to manipulate documents in the fashion outlined in the DOM specification.
We are obviously concerned with the Java language binding. The classes you should be able to add to your IDE or class path are all in the org.w3c.dom package (and its subpackages). However, before downloading these yourself, you should check the XML parser and XSLT processor you purchased or downloaded; like the SAX package, the DOM package is often included with these products. This also ensures a correct match between your parser, processor, and the version of DOM that is supported.
Most processors do not handle the task of generating a DOM tree themselves, but instead rely on an XML parser that is capable of generating a DOM tree. For this reason, it is often the XML parser that will have the needed DOM binding classes, and not the XSLT processor. In addition, this maintains the loose coupling between parser and processor, letting one or the other be substituted with comparable products. As Apache Xalan, by default, uses Apache Xerces for XML parsing and DOM generation, it is the level of support for DOM that Xerces provides that is of interest to us.
Getting a DOM Parser
One thing that the DOM does not specify is how a DOM tree is created. The specification instead focuses on the structure and APIs for manipulating this tree, which leaves a lot of latitude in how DOM parsers are implemented. Unlike SAX, where the XMLReaderFactory class dynamically loads an XMLReader implementation, you will need to import and instantiate your vendor's DOM parser class explicitly. To begin, create a new Java file and call it DOMParserDemo.java. We will look at how to build a simple DOM parsing program that reads in an XML document and prints out its contents. Create the structure and skeleton of your example class first, as shown in Example B.
Example B. DOMParserDemo Class
// Import your vendor's DOM parser
import org.apache.xerces.parsers.DOMParser;

/**
 * DOMParserDemo will take an XML file and display
 * the document using DOM.
 */
public class DOMParserDemo {

    /**
     * This parses the file, and then prints the document out
     * using DOM.
     * @param uri String URI of file to parse.
     */
    public void performDemo(String uri) {
        System.out.println("Parsing XML File: " + uri + "\n\n");

        // Instantiate your vendor's DOM parser implementation
        DOMParser parser = new DOMParser();
        try {
            // parser.parse(uri);
        } catch (Exception e) {
            System.out.println("Error in parsing: " + e.getMessage());
        }
    }

    /**
     * This provides a command-line entry point for this demo.
     */
    public static void main(String[] args) {
        if (args.length != 1) {
            System.out.println("Usage: java DOMParserDemo [XML URI]");
            System.exit(0);
        }
        String uri = args[0];
        DOMParserDemo parserDemo = new DOMParserDemo();
        parserDemo.performDemo(uri);
    }
}
This is set up in a fashion similar to our earlier SAXParserDemo class, but imports the Apache Xerces DOMParser class directly and instantiates it. We have commented out our actual invocation of the parse( ) method for the moment; before looking at what is involved in parsing a document into a DOM structure, we need to address issues of vendor neutrality in our choice of parsers. Keep in mind that this is simple and works great for many applications, but is not portable across parser implementations as our SAX example was.
The initial impulse would be to use Java constructs like Class.forName(parserClass).newInstance( ) to get an instance of the correct vendor parser class. However, different DOM implementations behave in a variety of fashions: sometimes the parse( ) method returns an org.w3c.dom.Document object (which we look at next); sometimes the parser class provides a getDocument( ) method; and sometimes different parameter types are required for the parse( ) method (InputSource, InputStream, String, URI, etc.) to be supplied with the URI. In other words, while the DOM tree created is portable, the method of obtaining that tree is not, without fairly complex reflection and dynamic class and method loading.
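For illustration, here is a hypothetical sketch of that reflective approach, assuming the vendor parser happens to expose a no-argument constructor, a parse(String) method, and a getDocument( ) method; as the preceding paragraph explains, exactly these assumptions vary between vendors, which is why the approach is fragile.

import org.w3c.dom.Document;

public class ReflectiveDomLoad {

    // Loads the named vendor parser class reflectively and asks it for a DOM tree.
    public static Document load(String parserClass, String uri) throws Exception {
        Object parser = Class.forName(parserClass)
                             .getDeclaredConstructor()
                             .newInstance();
        // Both calls rely on vendor-specific method names and signatures.
        parser.getClass().getMethod("parse", String.class).invoke(parser, uri);
        return (Document) parser.getClass().getMethod("getDocument").invoke(parser);
    }
}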
Web Hosting
Web hosting is a service offered by web hosting providers to individuals and organizations to make their websites accessible on the Internet. Depending on the requirement, one can choose among different types of web hosting, such as shared hosting, dedicated hosting, virtual private hosting, cloud hosting, etc.