Data Mining: An Overview

1. Introduction

Every organization accumulates huge volumes of data from a variety of sources on a daily basis. Data mining is an iterative process of creating predictive and descriptive models, by uncovering previously unknown trends and patterns in vast amounts of data from across the enterprise, in order to support decision making. Text mining applies the same analysis techniques to text-based documents, and the knowledge gleaned from data and text mining can be used to fuel strategic decision making. During the last decade, a number of knowledge discovery systems were created that detect structure hidden in data, in the form of functional dependencies between attributes, and formulate it as mathematical equations or other symbolic rules. One of the most developed of these systems can discover very complex and diverse equations, systematically handles the problem of data error analysis, evaluates the statistical significance of its results, and is designed to discover empirical laws in data in the form of functional programs constructed from standard and user-defined functional primitives [8]. Although the systems that discover numerical dependencies in data use diverse knowledge representation formalisms and search methods, they face the same set of difficulties inherent to the approach. Traditional document and text management tools are likewise inadequate for these needs: document management systems work well with homogeneous collections of documents, but not with the heterogeneous mix that knowledge workers face every day.

Even the best Internet search tools suffer from poor precision and recall.

2. An Architecture for Data Mining

To be applied effectively, these advanced techniques must be fully integrated with a data warehouse as well as with flexible, interactive business analysis tools. Many data mining tools currently operate outside of the warehouse, requiring extra steps for extracting, importing, and analyzing the data. Furthermore, when new insights require operational implementation, integration with the warehouse simplifies the application of results from data mining. The resulting analytic data warehouse can be applied to improve business processes throughout the organization, in areas such as promotional campaign management, fraud detection, new product rollout, and so on. Figure 1 illustrates an architecture for advanced analysis in a large data warehouse.
 

Figure 1 – Integrated Data Mining Architecture

The ideal starting point is a data warehouse containing a combination of internal data tracking all customer contact coupled with external market data about competitor activity. Background information on potential customers also provides an excellent basis for prospecting. This warehouse can be implemented in a variety of relational database systems: Sybase, Oracle, Redbrick, and so on, and should be optimized for flexible and fast data access.

An OLAP (On-Line Analytical Processing) server enables a more sophisticated end-user business model to be applied when navigating the data warehouse. The multidimensional structures allow the user to analyze the data as they want to view their business – summarizing by product line, region, and other key perspectives of their business. The Data Mining Server must be integrated with the data warehouse and the OLAP server to embed ROI-focused business analysis directly into this infrastructure. An advanced, process-centric metadata template defines the data mining objectives for specific business issues like campaign management, prospecting, and promotion optimization. Integration with the data warehouse enables operational decisions to be directly implemented and tracked. As the warehouse grows with new decisions and results, the organization can continually mine the best practices and apply them to future decisions.
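
To make the "summarizing by product line, region, and other key perspectives" concrete, here is a minimal sketch of the kind of multidimensional slice-and-dice an OLAP server performs, written with pandas. The table and column names (product_line, region, revenue) are illustrative assumptions, not drawn from the paper.

    # Minimal sketch of OLAP-style multidimensional summarization
    # using pandas. Data and column names are invented for illustration.
    import pandas as pd

    sales = pd.DataFrame({
        "product_line": ["A", "A", "B", "B", "A", "B"],
        "region":       ["East", "West", "East", "West", "East", "East"],
        "revenue":      [120.0, 95.0, 80.0, 60.0, 150.0, 70.0],
    })

    # Summarize revenue by product line and region -- the "key
    # perspectives" an analyst navigates in a multidimensional cube.
    cube = sales.pivot_table(index="product_line", columns="region",
                             values="revenue", aggfunc="sum", fill_value=0.0)
    print(cube)

The same pivot can be re-cut along any dimension (time period, channel, customer segment), which is exactly the navigation an OLAP front end exposes interactively.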

2.1. The Scope of Data Mining

Data mining derives its name from the similarities between searching for valuable business information in a large database — for example, finding linked products in gigabytes of store scanner data — and mining a mountain for a vein of valuable ore. Both processes require either sifting through an immense amount of material, or intelligently probing it to find exactly where the value resides. Given databases of sufficient size and quality, data mining technology can generate new business opportunities by providing these capabilities:

2.2. Capabilities

Automated prediction of trends and behaviors. Data mining automates the process of finding predictive information in large databases. Questions that traditionally required extensive hands-on analysis can now be answered directly from the data — quickly. A typical example of a predictive problem is targeted marketing. Data mining uses data on past promotional mailings to identify the targets most likely to maximize return on investment in future mailings. Other predictive problems include forecasting bankruptcy and other forms of default, and identifying segments of a population likely to respond similarly to given events.
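
As a hedged illustration of the targeted-marketing example, the sketch below fits a simple response model with scikit-learn. The features, data, and threshold are invented for illustration; the paper does not prescribe a particular algorithm.

    # Illustrative targeted-marketing response model (scikit-learn).
    # Each row is a past mailing recipient; the label records whether
    # they responded. All values are invented toy data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: [age, income_in_thousands, prior_purchases]
    X = np.array([[25, 30, 0], [40, 70, 3], [35, 50, 1],
                  [55, 90, 5], [30, 40, 0], [45, 80, 4]])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = responded to a past mailing

    model = LogisticRegression().fit(X, y)

    # Score new prospects and mail only the likeliest responders,
    # maximizing return on investment in the next campaign.
    prospects = np.array([[50, 85, 4], [22, 28, 0]])
    print(model.predict_proba(prospects)[:, 1])  # response probabilities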

Automated discovery of previously unknown patterns. Data mining tools sweep through databases and identify previously hidden patterns in one step. An example of pattern discovery is the analysis of retail sales data to identify seemingly unrelated products that are often purchased together. Other pattern discovery problems include detecting fraudulent credit card transactions and identifying anomalous data that could represent data entry keying errors.
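
A minimal sketch of the "products purchased together" discovery follows: it counts pairwise co-occurrences across market baskets in plain Python. The transactions and the support threshold are invented for illustration.

    # Minimal pattern-discovery sketch: find product pairs that are
    # frequently purchased together (invented example data).
    from itertools import combinations
    from collections import Counter

    transactions = [
        {"beer", "chips", "salsa"},
        {"beer", "chips"},
        {"bread", "milk"},
        {"beer", "chips", "bread"},
    ]

    pair_counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    # Report pairs that co-occur in at least half the transactions.
    min_support = len(transactions) / 2
    for pair, count in pair_counts.most_common():
        if count >= min_support:
            print(pair, count)   # e.g. ('beer', 'chips') 3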

Data mining techniques can yield the benefits of automation on existing software and hardware platforms, and can be implemented on new systems as existing platforms are upgraded and new products developed. When data mining tools are implemented on high performance parallel processing systems, they can analyze massive databases in minutes. Faster processing means that users can automatically experiment with more models to understand complex data. High speed makes it practical for users to analyze huge quantities of data. Larger databases, in turn, yield improved predictions. Databases can be larger in both depth and breadth:

More columns. Analysts must often limit the number of variables they examine when doing hands-on analysis due to time constraints. Yet variables that are discarded because they seem unimportant may carry information about unknown patterns. High performance data mining allows users to explore the full depth of a database, without preselecting a subset of variables.

More rows. Larger samples yield lower estimation errors and variance, and allow users to make inferences about small but important segments of a population.
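
The statistical reasoning here is standard sampling theory: for a simple random sample, the standard error of an estimated mean is σ/√n, so a database with four times as many rows roughly halves the estimation error for such a statistic, and makes estimates for small segments usable at all.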

A recent Gartner Group Advanced Technology Research Note listed data mining and artificial intelligence at the top of the five key technology areas that “will clearly have a major impact across a wide range of industries within the next 3 to 5 years.” Gartner also listed parallel architectures and data mining as two of the top 10 new technologies in which companies will invest during the next 5 years. According to a recent Gartner HPC Research Note, “With the rapid advance in data capture, transmission and storage, large-systems users will increasingly need to implement new and innovative ways to mine the after-market value of their vast stores of detail data, employing MPP [massively parallel processing] systems to create new sources of business advantage (0.9 probability).”

3. Commonly Used Data Mining Techniques

The most commonly used techniques in data mining are:

Artificial neural networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.

Decision trees: Tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi-Square Automatic Interaction Detection (CHAID).

Genetic algorithms: Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of evolution.

Nearest neighbor method: A technique that classifies each record in a dataset based on a combination of the classes of the k records most similar to it in a historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor technique.
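
As a hedged sketch of the nearest neighbor method just described, the following classifies a new record by majority vote among its k most similar historical records, using scikit-learn. The two-attribute records are invented toy data.

    # Sketch of the k-nearest neighbor technique (scikit-learn).
    # Invented data: two numeric attributes per record, binary class.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    history_X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],   # class 0
                          [3.0, 3.2], [3.1, 2.9], [2.9, 3.0]])  # class 1
    history_y = np.array([0, 0, 0, 1, 1, 1])

    knn = KNeighborsClassifier(n_neighbors=3)  # k = 3
    knn.fit(history_X, history_y)

    new_record = np.array([[2.8, 3.1]])
    print(knn.predict(new_record))  # -> [1], by majority vote of 3 neighbors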

4. Text Mining Techniques

The key techniques of text mining include:

1. Feature extraction

2. Thematic indexing

3. Clustering

4. Summarization

These four techniques are essential because they solve two key problems in applying text mining: they make textual information accessible, and they reduce the volume of text that must be read by end users before information is found.

Feature extraction deals with finding particular pieces of information within a text. The target information can be general in form, such as names of people or companies, or it can be pattern-driven. For example, applications analyzing merger and acquisition stories may extract the names of the companies involved, the cost, the funding mechanisms, and whether or not regulatory approval is required.

Thematic indexing uses knowledge about the meaning of words in a text to identify the broad topics covered in a document. For example, documents about aspirin might be classified under pain relievers or analgesics. Thematic indexing such as this is often implemented using multidimensional taxonomies. A taxonomy, in the text mining sense, is a hierarchical knowledge representation scheme. This construct, sometimes called an ontology to distinguish it from navigational taxonomies such as Yahoo’s, provides the means to search for documents about a topic instead of documents with particular keywords. For example, an analyst researching mobile communications should be able to search for documents about wireless protocols without having to know key phrases such as wireless application protocol (WAP).

Clustering is another text mining technique, with applications in business intelligence. Clustering groups similar documents according to their dominant features. In text mining and information retrieval, a weighted feature vector is frequently used to describe a document. These feature vectors contain a list of the main themes or keywords, along with a numeric weight indicating the relative importance of each theme or term to the document as a whole. Unlike data mining applications, which use a fixed set of features for all analyzed items (e.g. age, income, gender), documents are described with a small number of terms or themes chosen from potentially thousands of possible dimensions. There is no single best way to deal with document clustering, but three approaches are often used: hierarchical clusters, binary clusters and self-organizing maps. Hierarchical clusters [3] use a set-based approach: the root of the hierarchy is the set of all documents in a collection, the leaf nodes are sets containing individual documents, and the intervening layers contain progressively smaller sets of documents, grouped by similarity. In binary clustering, each document is in one and only one cluster, and clusters are created to maximize the similarity between documents within a cluster and minimize the similarity between documents in different clusters. Self-organizing maps (SOMs) use neural networks to map documents from sparse, high-dimensional spaces onto two-dimensional maps, where similar documents tend to occupy nearby positions in the grid.
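
As a hedged illustration of flat document clustering with weighted feature vectors, the sketch below builds TF-IDF vectors and groups documents with k-means using scikit-learn (k-means stands in here for the binary-cluster idea; it is not a method the paper itself prescribes). The toy documents are invented.

    # Sketch of document clustering: TF-IDF feature vectors + k-means.
    # The toy documents are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = [
        "wireless application protocol for mobile phones",
        "mobile communications and wireless networks",
        "aspirin as a pain reliever and analgesic",
        "analgesics and over-the-counter pain relief",
    ]

    # Weighted feature vectors: each document becomes a sparse vector
    # of term weights over thousands of possible dimensions.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    print(labels)  # documents about the same theme share a cluster id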

The last text mining technique is summarization. The purpose of summarization is to describe the content of a document while reducing the amount of text a user must read. The main ideas of most documents can be conveyed with as little as 20 percent of the original text, so little is lost by summarizing. As with clustering, there is no single summarization algorithm. Most use morphological analysis of words to identify the most frequently used terms while eliminating words that carry little meaning, such as the articles the, an and a. Some algorithms weight terms used in opening or closing sentences more heavily than other terms, while other approaches look for key phrases that identify the main themes of a document.
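
A minimal sketch of the frequency-based extractive approach just described: score each sentence by the frequency of its content words after dropping low-meaning stopwords, then keep the top-scoring sentences. The stopword list and example text are illustrative assumptions.

    # Minimal frequency-based extractive summarizer (illustrative sketch).
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "are"}

    def summarize(text, max_sentences=2):
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        words = [w for w in re.findall(r"[a-z']+", text.lower())
                 if w not in STOPWORDS]
        freq = Counter(words)

        # Score a sentence by the total frequency of its content words.
        def score(sentence):
            return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                       if w not in STOPWORDS)

        top = sorted(sentences, key=score, reverse=True)[:max_sentences]
        return [s for s in sentences if s in top]  # keep original order

    print(summarize("Text mining reduces reading. Mining finds patterns. "
                    "The weather is nice. Text patterns guide mining tools."))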

5. Application Areas of Text Mining

From government and legislative organizations, to corporations and universities, to journalists, writers and college students, we all create, store, retrieve, and analyze texts. Hence, numerous organizations are faced with various document management and text analysis tasks. Consider a few simple examples:

· Internet search engines could deliver much better results by accepting, and making sense of, natural language queries. If documents found in response to a query were analyzed semantically for their relevance in the context of the original query, the precision of the search could increase significantly: instead of an overwhelming list of more than 10,000 documents in response to your query, the system could provide a short list of the most relevant ones.

· Call center specialists have to understand customer support questions, quickly select relevant documents from among available manuals, frequently-asked-questions lists and engineering notes, and retrieve the bits of knowledge that help answer the question. An automated system for categorizing available materials and retrieving the most relevant fragments matching natural language questions could save hundreds of thousands of man-hours and dramatically reduce response time. Identifying the best fragments through thesauri and ontologies could significantly improve recall, the thoroughness of the search.

· Lawyers, insurers and venture capitalists often have to quickly grasp the meaning of cases, claims and proposals, respectively. They need to improve the quality of querying the Web and diverse databases to find and retrieve relevant documents. Their practice could benefit tremendously from automated summarization of texts and from feature extraction, where key points from the text are organized in a database holding meta-information to improve future access to the knowledge contained in documents.

· Researching medical journals for new hypotheses about the causes and effects of a disease is an ideal case of what text mining ought to be able to do.

· Intelligent e-mail routing, automatic monitoring of chat rooms, and monitoring of Web pages are all important applications.

5.1. Grand Challenges for Text Mining

Text mining is an exciting research area that tries to solve the information overload problem using techniques from data mining, machine learning, natural language processing (NLP), information retrieval (IR) and knowledge management. Text mining involves the preprocessing of document collections (text categorization, information extraction, term extraction), the storage of the intermediate representations, techniques to analyze these intermediate representations (distribution analysis, clustering, trend analysis, association rules, etc.) and visualization of the results. Here are some of the challenges facing the text mining research area:

5.2. Challenge 1: Entity Extraction

Most text analytics systems rely on accurate extraction of entities and relations from documents. However, the accuracy of entity extraction systems in some domains reaches only 70-80%, creating a noise level that prevents the adoption of text mining systems by a wider audience. We are seeking domain-independent and language-independent named entity recognition (NER) systems able to reach an accuracy of 99-100%. Building on such systems, we are seeking domain-independent and language-independent relation extraction systems able to reach a precision of 98-100% and a recall of 95-100%. Since these systems should work in any domain, they must be totally autonomous and require no human intervention.
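
As a hedged sketch of what an entity extraction component does (not a solution to the accuracy challenge itself), the following uses the spaCy library's pretrained English model; the example sentence is invented. Off-the-shelf models like this one are exactly the kind of component whose accuracy the challenge asks to push toward 99-100%.

    # Named entity recognition with spaCy's pretrained English model
    # (illustrative sketch; example sentence invented).
    # Setup: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Acme Corp. acquired Widget Inc. for $50 million, "
              "pending approval by the Federal Trade Commission.")

    for ent in doc.ents:
        print(ent.text, ent.label_)  # e.g. "Acme Corp." ORG, "$50 million" MONEY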

5.3. Challenge 2: Autonomous Text Analysis

Text analytics systems today are largely user guided: they enable users to view various aspects of a corpus. We would like a text analytics system that is totally autonomous, analyzes huge corpora, and comes up with truly interesting findings that are not captured by any single document in the corpus and were not known before. The system could use the Internet to filter out findings that are already known. The “interest” measure, which is inherently subjective, would be defined by a committee of experts in each domain. Such systems could then be used for alerting purposes in the financial domain, the anti-terror domain, the biomedical domain and many other commercial domains: the system would take streams of documents from a variety of sources and send e-mail to relevant people whenever an “interesting” finding is detected. Building such a system on top of the extraction capabilities of Challenge 1 is our text mining grand challenge.

6. Conclusion

Mining texts in different languages is a major open problem, since text mining tools should be able to work with many languages and with multilingual documents. Integrating a domain knowledge base with a text mining engine would boost its efficiency, especially in the information retrieval and information extraction phases. Acquiring such knowledge implies effective querying of documents as well as the combination of pieces of information from different textual sources (e.g. the World Wide Web). Discovering such hidden knowledge is an essential requirement for many corporations, due to its wide spectrum of applications.

7. References

1. Jochen Dorre, Peter Gersti, Roland Seiffert (1999), Text Mining: Finding Nuggets in Mountains of Textual Data, Proceedings of ACM KDD 1999, San Diego, CA, USA.

2. Ah-Hwee Tan (1999), Text Mining: The State of the Art and the Challenges, Proceedings of the PAKDD’99 Workshop on Knowledge Discovery from Advanced Databases (KDAD’99), Beijing, April 1999, pp. 71-76.

3. Danial Tkach (1998), Text Mining Technology: Turning Information into Knowledge, IBM white paper.

4. Helena Ahonen, Oskari Heinonen, Mika Klemettinen, A. Inkeri Verkamo (1997), Applying Data Mining Techniques in Text Analysis, Report C-1997-23, Department of Computer Science, University of Helsinki.

5. Mark Dixon (1997), An Overview of Document Mining Technology, http://www.geocities.com/ResearchTriangle/Thinktank/1997/mark/writings/dixm97_dm.ps

6. Arseniev, S.B. & Kiselev, M.V. (1991), The Object-Oriented Approach to the Medical Real Time System Design, Proceedings of MIE-91, Lecture Notes in Medical Informatics, v. 45, Springer-Verlag, Berlin, pp. 508-512.

7. Falkenhainer, B.C. & Michalski, R.S. (1990), Integrating Quantitative and Qualitative Discovery in the ABACUS System, in Y. Kodratoff & R.S. Michalski (Eds.), Machine Learning: An Artificial Intelligence Approach (Volume III), San Mateo, CA: Kaufmann, pp. 153-190.

8. Kiselev, M.V. (1994), PolyAnalyst – a Machine Discovery System Inferring Functional Programs, Proceedings of the AAAI Workshop on Knowledge Discovery in Databases ’94, Seattle, pp. 237-249.

9. Kiselev, M.V., Arseniev, S.B. & Flerov, E.V. (1994), PolyAnalyst – a Machine Discovery System for Intelligent Analysis of Clinical Data, ESCTAIC-4 Abstracts (European Society for Computer Technology in Anaesthesiology and Intensive Care), Halkidiki, Greece, p. H6.

10. Langley, P., Simon, H.A., Bradshaw, G.L. & Zytkow, J.M. (1987), Scientific Discovery: Computational Explorations of the Creative Processes, Cambridge, MA: MIT Press.

Mr. Chandrakant R. Satpute is a librarian at Godavari College of Engineering, Jalgaon, Maharashtra. He has 11 years of experience in teaching and librarianship. He has been associated with the Khandesh Library Association (KLA). He has published six national and international papers. His areas of interest are library automation and digitization.

E-mail:
