Marinka Zitnik

Fusing bits and DNA


Embedded Business Intelligence in SP 2010

It has been a while since SP 2010 was released, and since I have been developing quite a lot on SP 2007 and am now exploring the 2010 version, I feel a moral duty :) to write something about this latest version as well. I decided to write about embedded business intelligence, which is not mentioned very often but which I think will become one of the integral parts of SP.

First of all, embedded BI is the result of incorporating PerformancePoint Server into SharePoint as PerformancePoint Services. Before SP 2010, PerformancePoint Server was an independent, separate product, but that is no longer the case. The new services enrich SharePoint with KPIs, scorecards, matrices and much more, all of which can easily be rendered as dashboards or charting web parts, consumed through Visio Services, or used via the numerous improvements to Excel Services.

Together with PerformancePoint Services you get the Dashboard Designer, a GUI through which you can hook up the data you want to drive your scorecards or KPIs (e.g. connect it to SQL Server Analysis Services, easily create KPIs in the designer and render them as web parts in SharePoint). As data mining and OLAP are becoming ever more important BI technologies, it is worth stressing their benefits. First, you are in control of what happens with your data: data sources can be configured by admins and dashboards by departmental business units. Furthermore, it is very easy to slice and dice the data to get the answers you are looking for. One of the many new features is the decomposition tree: we can drill into key nodes and get more detail in a very visual, graphical way, which enriches the models from which we pull the data, so users can get quality answers quickly.

A few improvements are included in Excel Services, allowing users to publish and share parts of workbooks or whole workbooks, while the owner still retains total control over the services users consume.

Another novelty worth mentioning is Visio Services. It is simply a matter of creating a diagram (e.g. a network diagram or a graph of the resources used on a project) that is data bound and which then reflects actual, current data (e.g. changing pictures or states according to the progress of the project). It is more a simple, user-friendly workflow that updates itself than a static diagram. Do not confuse this with another powerful tool, WF (Windows Workflow Foundation), used to create complex workflows in .NET and Visual Studio.

 

Recognized as Google Anita Borg Scholarship Finalist

Yet more great news concerning my (little) involvement with Google. A few weeks ago I wrote about being accepted to Google Summer of Code 2011 with a project on matrix factorization techniques in data mining for the Orange platform.

Now Google has announced the Google Anita Borg Scholarship recipients and finalists. I applied for the scholarship this year and am among the 147 undergraduate and graduate students chosen worldwide. Just for clarification: this is completely unrelated to GSoC (the only common denominator being Google itself); the scholarship is awarded based on the strength of candidates' academic performance, leadership experience and demonstrated passion for computer science.

Scholars from Europe attend the Scholars' Retreat at Google's European engineering centre in Zurich in June, and I am very much looking forward to this event and to meeting some fascinating people. The retreat will include workshops, speakers, panelists, breakout sessions and social activities scheduled over a couple of days.

  • (Official Google Blog post with the results of the scholar selection process) link
  • (Official Google Students Blog announcement of the scholars) link
  • (Faculty news) link, in Slovene

 

Numerical Analysis of Matrix Functions

I have spent some time recently studying matrix functions, from both a theoretical and a computational perspective. There is a nice book by Nick J. Higham on functions of matrices, which I highly recommend to the interested reader and which provides a thorough overview of current theoretical results on matrix functions and of several efficient numerical methods for computing them. Another well-written text is Rajendra Bhatia's book on matrix analysis (Graduate Texts in Mathematics), which includes topics such as the theory of majorization, variational principles for eigenvalues, operator monotone and convex functions, matrix inequalities and perturbation of matrix functions. Bhatia's book is more functional-analytic in spirit, whereas Higham's focuses more on numerical linear algebra.

Below you will find a report I produced, containing a few interesting (some elementary) proofs and implementations of algorithms. The interested reader should consult the literature above to be able to follow the text.
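
To give a flavour of such computations, here is a minimal sketch using SciPy's general-purpose matrix-function routines; these are off-the-shelf routines, not the implementations from my report (SciPy's expm uses a scaling-and-squaring Padé method and funm a Schur-based approach, families of algorithms analysed in Higham's book):

import numpy as np
from scipy.linalg import expm, funm, logm, sqrtm

# A small test matrix with positive eigenvalues, so every function
# below is defined on its spectrum.
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])

S = sqrtm(A)                          # principal matrix square root
print(np.allclose(S @ S, A))          # True: S really squares to A

print(np.allclose(expm(logm(A)), A))  # True: exp and log are mutual inverses here

C = funm(A, np.cos)                   # a general scalar function applied to a matrix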

 

Fractal Dimension Computation Support in MF Library

I have always been fascinated by the world of fractals and have been deeply enthusiastic about exploring the maths behind them. This post announces support for fractal dimension computation in the MF - Matrix Factorization for Data Mining library.

In the following paragraphs we briefly review the most important concepts and definitions related to the fractal dimension.

The embedding dimensionality of a data set is the number of attributes in the data set. The intrinsic dimensionality is the actual number of dimensions into which the n original m-dimensional vectors can be embedded while (approximately) preserving the distances among them. Given a fractal, i.e. a self-similar set of points consisting of r self-similar pieces, each scaled down by a factor of s, the fractal dimension D of the object is defined as

D = \frac{\log r}{\log s}.

Example: the Sierpinski triangle (see figure 1 in the Appendix of the Report document, from my seminar work for CG on L-systems) consists of three self-similar parts, each scaled down by a factor of two, so its fractal dimension is D = log 3 / log 2 ≈ 1.58.

For a finite set of points in a vector space, we say the set is statistically self-similar on a range of scales (a, b) on which the self-similarity holds. In theory, however, a self-similar object should have infinitely many points, because each self-similar part is a scaled-down version of the original object. As a measure of the intrinsic fractal dimension of a data set, the slope of the correlation integral is used. The correlation integral C(r) for the data set S is defined as

C(r) = \#\{(u, v) : u, v \in S,\ u \neq v,\ \mathrm{dist}(u, v) \le r\}.

Given a data set S which is statistically self-similar in the range (a, b), its correlation fractal dimension D is

D = \frac{\partial \log C(r)}{\partial \log r}, \qquad r \in [a, b].

It has been shown that the correlation fractal dimension corresponds to the intrinsic dimension of a data set. Many properties hold for the correlation fractal dimension; see [1] and [2]. For us it is especially important that the intrinsic dimensionality gives a lower bound on the number of attributes needed to keep the vital characteristics of the data set.

A fast algorithm for computing the intrinsic dimension of a data set, presented in [2], is implemented in the MF - Matrix Factorization for Data Mining library. An intuitive explanation of the correlation fractal dimension is that it measures how the number of neighbouring points increases as the distance grows. It therefore measures the spread of the data; a fractal dimension equal to the embedding dimension means that the spread of the points in the data set is maximal.
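
To make the definitions concrete, here is a minimal sketch of estimating the correlation fractal dimension. This is written for this post and is not the MF library's implementation: the library follows the fast O(n) box-counting algorithm of [2], whereas this naive version examines all pairs in O(n^2):

import numpy as np

def correlation_fractal_dimension(X, n_radii=20):
    # Pairwise Euclidean distances between all distinct points of X (n x m).
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    d = dists[np.triu_indices(len(X), k=1)]

    # Evaluate the correlation integral C(r) on a log-spaced grid of radii
    # covering the range of scales (a, b) present in the data.
    radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), n_radii)
    C = np.array([np.count_nonzero(d <= r) for r in radii])

    # The correlation fractal dimension is the slope of log C(r) vs log r.
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# Sanity check: points scattered along a line in 3-D space have intrinsic
# dimension close to 1 even though the embedding dimension is 3.
t = np.random.rand(400, 1)
X = np.hstack([t, 2 * t, -t]) + 0.001 * np.random.randn(400, 3)
print(correlation_fractal_dimension(X))  # roughly 1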

Of high importance is Conjecture 1 in [1]: with all other parameters being equal, a dimensionality reduction method which achieves a higher fractal dimension in the reduced space is better than the rest for any data mining task. The correlation fractal dimension of a data set can therefore be used:

  • for determining the optimal number of dimensions in the reduced space,
  • as a performance comparison tool between dimensionality reduction methods,
and all this can be done in a way that is scalable to large data sets; a sketch of such usage follows below.
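
As an illustration of the first use, the sketch below reuses the correlation_fractal_dimension helper defined above; scikit-learn's PCA merely stands in for an arbitrary dimensionality reduction method, and the 95% tolerance is an arbitrary choice of mine, not a recommendation from [1] or [2]:

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(500, 10)                # placeholder data set
D_full = correlation_fractal_dimension(X)  # estimate of the intrinsic dimension

# Pick the smallest target dimensionality whose projection still
# preserves (most of) the intrinsic fractal dimension.
for k in range(1, X.shape[1] + 1):
    Xk = PCA(n_components=k).fit_transform(X)
    if correlation_fractal_dimension(Xk) >= 0.95 * D_full:
        print("smallest adequate number of dimensions:", k)
        break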

 

Recommended reading:

  1. Kumaraswamy, K. (2003). Fractal Dimension for Data Mining. Carnegie Mellon University.
  2. Traina Jr., C., Traina, A., Wu, L., Faloutsos, C. (2010). Fast Feature Selection using Fractal Dimension. Journal of Information and Data Management, 1(1), 3-16.

In [1] the concept of the intrinsic fractal dimension of a data set is introduced and it is shown how the fractal dimension can be used to aid several data mining tasks. In [2] a fast O(n) algorithm to compute the fractal dimension of a data set is presented; on top of that, a fast, scalable algorithm to quickly select the most important attributes of a given set of n-dimensional vectors is described.

 

This year I am participating in the Machine Learning Summer School (MLSS) held in Tübingen, Germany. The Summer School offers an opportunity to learn about fundamental and advanced aspects of machine learning, data analysis and inference from leaders of the field. The topics are diverse and include graphical models, multilayer networks, cognitive and kernel learning, network modeling and information propagation, distributed ML, structured-output prediction, reinforcement learning, sparse models, learning theory, causality and much more. I am looking forward to it. Also, posters are a long-standing tradition at the MLSS; below is an image of a poster presentation that covers some of my recent work.

 

 