As the module progresses you will build a substantial data warehouse application for a real-world scenario of your choosing. You will design a star schema for the data warehouse. The data warehouse is a metaphor for multidimensional data storage: the actual physical storage of such data may differ from its logical representation. If the data are stored in a relational database (relational OLAP, or ROLAP), you will create an actual data warehouse using a system such as Microsoft Access, Microsoft SQL Server, or Oracle. A data warehouse can also be built on array-based multidimensional storage (multidimensional OLAP, or MOLAP). This data structure supports direct array addressing, where dimension values are accessed via the position, or index, of their corresponding array locations.
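To make direct array addressing concrete, here is a minimal sketch in Python; the dimensions, their values, and the measure values are all invented for illustration. Each dimension value maps to an array index, so a cell is fetched by position rather than by searching rows.

```python
# Sketch of MOLAP direct array addressing (dimension values and
# cell contents are invented for this example).
semesters = ["2023A", "2023B"]          # one dimension's values
modules = ["DataMining", "Databases"]   # another dimension's values

# Index maps: dimension value -> array position
sem_idx = {v: i for i, v in enumerate(semesters)}
mod_idx = {v: i for i, v in enumerate(modules)}

# 2-D cube of a measure (e.g. avg_grade), stored as a dense array
cube = [[72.0, 65.0],
        [58.0, 81.0]]

# A cell is addressed directly by index, with no lookup or scan
grade = cube[sem_idx["2023B"]][mod_idx["DataMining"]]
```

The same idea extends to three or more dimensions by nesting further arrays, which is what Task 1 d) asks you to implement.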
Your first step is to identify the domain you would like to manage with your data warehouse, and to construct an entity-relationship diagram for the data warehouse. I suggest that you pick an application that you will enjoy working with: a hobby, material from another course, a research project, etc.
Try to pick an application that is relatively substantial, but not too enormous. For example, a data warehouse for a university might consist of four dimensions: student, module, semester, and lecturer, and two measures: count and avg_grade. At the lowest conceptual level (i.e., for a given combination of student, module, semester, and lecturer), the avg_grade measure stores the student's actual module grade. At higher conceptual levels, avg_grade stores the average grade for the given combination. [Note: in your coursework, you must not use the university scenario or a similar one!] Your data warehouse should consist of at least four dimensions, one of which should be a time dimension. When expressed in the entity-relationship model, your design might have one fact table plus four (or more) dimension tables, and a similar number of relationships; you should certainly include one-to-many relationships. Each dimension should have at least three levels (including all), such as student < course < university (all).
a) Describe the data warehouse application you propose to work with throughout the module. Your description should be brief and relatively formal. If there are any unique or particularly difficult aspects of your proposed application, please point them out. Your description will be graded only on suitability and conciseness. [2 marks]
b) [ROLAP] Draw a star schema diagram, including attributes, for your data warehouse. Don’t forget to underline primary-key attributes and to include arrowheads indicating the multiplicity of relationships. Write an SQL database schema for your data warehouse using CREATE TABLE commands (pick suitable datatypes for each attribute), and use INSERT commands to insert tuples. You need to populate the data warehouse with sample data (at least five attributes for each dimension table and at least three records in each table) before manipulating the data warehouse. For this task, you ONLY need to submit the star schema diagram and the populated tables. [5 marks]
c) [ROLAP] Starting with the base cuboid [e.g., student, module, semester, lecturer], carry out two OLAP operations. For example, which specific OLAP operations would you perform in order to list the average grade of the Data Mining module for each university student in the university scenario? Write and run an SQL query to obtain a list resembling the example above. Provide a screenshot as proof that your query worked. [6 marks]
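For the university example in the brief, the two operations would be a slice (dice) on module = 'Data Mining', followed by a roll-up of the semester and lecturer dimensions to all, grouping on student only. A hedged sketch of the corresponding SQL, run through sqlite3 with an assumed flat fact table and invented sample data:

```python
import sqlite3

# Sketch: slice on one module, then roll semester and lecturer up to
# 'all' with GROUP BY student. Table layout and data are assumptions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE fact_grades
               (student TEXT, module TEXT, semester TEXT,
                lecturer TEXT, grade REAL)""")
cur.executemany("INSERT INTO fact_grades VALUES (?,?,?,?,?)", [
    ("Ana", "Data Mining", "S1", "Smith", 70.0),
    ("Ana", "Data Mining", "S2", "Jones", 80.0),
    ("Bob", "Data Mining", "S1", "Smith", 60.0),
    ("Bob", "Databases",   "S1", "Lee",   90.0),
])

# Average grade of Data Mining per student
rows = cur.execute("""
    SELECT student, AVG(grade) AS avg_grade
    FROM fact_grades
    WHERE module = 'Data Mining'
    GROUP BY student
    ORDER BY student
""").fetchall()
```

In a proper star schema the WHERE clause would join the fact table to the module dimension; the slice-then-roll-up structure of the query is the same.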
d) [MOLAP] Use any high-level language you like, such as C, C++, Java or VB, to implement a multidimensional array for your data warehouse. Populate your arrays, then perform the same operation as described in c). Compare the solutions to c) and d) and resolve any differences. [8 marks]
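A hedged sketch of the MOLAP side of the comparison, written in Python for brevity (the brief suggests C, C++, Java or VB). It repeats the slice-and-roll-up from c) over an in-memory array; the lecturer dimension is dropped and all values are invented to keep the example short.

```python
# Sketch: the query from c) answered by direct array addressing over a
# 3-D cube (student x module x semester). Values are invented.
students  = ["Ana", "Bob"]
modules   = ["Data Mining", "Databases"]
semesters = ["S1", "S2"]

s_idx = {v: i for i, v in enumerate(students)}
m_idx = {v: i for i, v in enumerate(modules)}
t_idx = {v: i for i, v in enumerate(semesters)}

# 3-D cube of grades; None marks an empty cell
cube = [[[None] * len(semesters) for _ in modules] for _ in students]
cube[s_idx["Ana"]][m_idx["Data Mining"]][t_idx["S1"]] = 70.0
cube[s_idx["Ana"]][m_idx["Data Mining"]][t_idx["S2"]] = 80.0
cube[s_idx["Bob"]][m_idx["Data Mining"]][t_idx["S1"]] = 60.0
cube[s_idx["Bob"]][m_idx["Databases"]][t_idx["S1"]]  = 90.0

def avg_grade_per_student(module):
    """Slice on one module, roll semesters up to 'all' per student."""
    result = {}
    mi = m_idx[module]
    for s, si in s_idx.items():
        cells = [g for g in cube[si][mi] if g is not None]
        if cells:
            result[s] = sum(cells) / len(cells)
    return result
```

Running `avg_grade_per_student("Data Mining")` should give the same per-student averages as the SQL query in c); any discrepancy points to a bug in one of the two implementations, which is exactly the comparison d) asks for.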
e) [MOLAP] Unfortunately, such a cube often yields a huge yet very sparse multidimensional matrix. Present an example illustrating such a huge, sparse data cube. Describe an implementation method that can elegantly overcome the sparse-matrix problem.
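One common remedy, sketched here under assumed dimension sizes, is to store only the non-empty cells in a map keyed by coordinate tuples; chunking and compressed chunk formats are the alternatives used by real MOLAP engines.

```python
# Sketch: a sparse cube stored as a dictionary of coordinate tuples.
# A dense 1000 x 1000 x 1000 cube would need 10^9 cells, yet most
# (student, module, semester)-style combinations never occur.
DIMS = (1000, 1000, 1000)

sparse_cube = {}            # (i, j, k) -> measure value

def put(i, j, k, value):
    sparse_cube[(i, j, k)] = value

def get(i, j, k):
    # Empty cells cost no storage; missing keys read as None
    return sparse_cube.get((i, j, k))

put(3, 517, 42, 88.5)       # one populated cell out of a billion
```

The trade-off is that addressing is now a hash lookup rather than pure index arithmetic, which is the kind of point your answer to e) should discuss.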
Task 2: Choose one from the following three tasks. [15 marks in total]
a) Mining association rules over distributed databases
Review algorithms for mining association rules over distributed databases.
b) Mining classification over large databases
Review algorithms for classification over large databases (focusing on efficiency and scalability).
c) Mining clusters over large databases
Review algorithms for clustering over large databases (focusing on performance, e.g., efficiency, scalability, and the ability to deal with noise and outliers).
Task 3: [30 marks in total]
A database in .ARFF format has been provided for you on Studynet. Analyse this database using the WEKA toolkit and tools introduced within this module. Produce a report explaining which tools you used and why, what results you obtained, and what this tells you about the data. Marks will be awarded for: variety of tools used, quality of analysis, and interpretation of the results. An extensive report is not required (at most 4000 words), nor is a detailed explanation of the techniques employed, but any graphs or tables produced should be described and analysed in the text. A reasonable report could be achieved by doing a thorough analysis using three techniques. An excellent report would use at least four tools to analyse the dataset, and provide detailed comparisons between the results.
You should perform the following steps:
1. Analyse the attributes in the data, and consider their relative importance with respect to the target class.
2. Construct graphs of classification performance against training set size for a range of classifiers taken from those considered in the module.
3. Analyse the data structure/representation generated by each classifier when trained on the complete dataset.
4. Combine the results from the previous three steps and all your classifiers to develop a model of why instances fall into particular classes.
Produce a report containing your answers to the above.
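To illustrate step 2, here is a hedged, self-contained sketch of measuring classification accuracy against training-set size. WEKA's Experimenter produces these curves directly; the tiny 1-nearest-neighbour classifier and synthetic 1-D data below are stand-ins invented purely to show the shape of the experiment.

```python
import random

# Sketch: accuracy vs. training-set size (a learning curve).
# Classifier and data are toy stand-ins for your WEKA runs.
random.seed(0)

def make_instance():
    x = random.uniform(0, 10)
    return (x, "low" if x < 5 else "high")

data = [make_instance() for _ in range(300)]
train_pool, test_set = data[:200], data[200:]

def predict_1nn(train, x):
    # 1-nearest-neighbour: label of the closest training instance
    return min(train, key=lambda inst: abs(inst[0] - x))[1]

curve = []
for size in (10, 50, 100, 200):
    train = train_pool[:size]
    correct = sum(predict_1nn(train, x) == y for x, y in test_set)
    curve.append((size, correct / len(test_set)))
```

Plotting `curve` for each of your chosen classifiers, on the provided .ARFF dataset, gives the graphs step 2 asks for.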
[Total 30 marks]