UTArlingtonX: LINK5.10x Data, Analytics, and Learning or #DALMOOC (Week 3)

Before I start

This post is one of the longest I have written so far (it took me about a week to finish), and I still have some open questions: 

  1. […] what determines the centrality of single nodes or clusters [when applying layouts in Gephi]? 
  2. […] what [does] the modularity class tell me about the community?
  3. Does this exemplify the importance of additional measures besides “degree”, as the number of connections does not necessarily correspond to the ability to serve as a bridge?
  4. Isn’t it logical that, being part of a smaller sub-cluster, closeness centrality and eccentricity decline, as smaller groups are per se connected to fewer nodes in the network? 

These questions can be found in the text marked in bold, where they are embedded in context to offer more information. I would be very happy if people interested in Social Network Analysis (and Gephi) could help me find the answers. Thank you in advance 🙂

Social Network Analysis (SNA) – that has a familiar ring!

When hearing about SNA in week 3, I remembered reading this term in some of the last weeks’ resources. Baker & Siemens (2014) frame SNA as one of four approaches to structure discovery in EDM/LA, which can be seen as the opposite of prediction (there is no a priori idea of a predicted variable). SNA reveals the structure of interaction by analyzing the relationships between individual actors. Shum & Ferguson introduce SNA as a “possibility” offered by Social Learning Analytics (SLA). This possibility is social in itself – as is, for example, discourse analytics – as opposed to social learning disposition analytics or social learning content analytics, which need to be “socialized” first. So again, this sounds very social, and I wonder if Social Learning Analytics and Social Network Analysis can combine three of my areas of interest: (Social) Psychology, Learning and Analytics.

SNA – What are we talking about?

Social Psychology – a friend I believed to be lost – is striking back, as powerful and interesting as ever. Burt, Kilduff & Tasselli (2013) mention two facts established by Social Psychology that network models of advantage (“as a function of breadth, timing and arbitrage”) build upon. These are

  1. people form groups based on where they meet
  2. within a group, communication is more frequent and more influential than between different groups (as similar views develop).

As a potential source they mention Festinger et al. (1950), who developed the idea of group cohesiveness; on page 151 of their book it says:

The gist of these conclusions may be summarized as follows: In a community of people who are homogeneous with respect to many of the factors [determining friendship formation], the physical factors arising from the arrangement of houses are major determinants of what friendships will develop and what social groupings will be formed. These social groupings create channels of communication for the flow of information and opinions. Standards for attitudes and behavior relevant to the functioning of the social group develop, with resulting uniformity among the members of the group. Other people deviate because they were never in communication with the group.

This is a concept we can observe in our daily life. Imagine a university course starting and some students joining a little later than the rest. A loose group has already formed, the information flow is established. A new member joins this network and wants to put herself on an equal footing with the others. Group cohesiveness represents “the property of a group that effectively binds people, as group members, to one another and to the group as a whole, giving the group a sense of solidarity and oneness.” (Hogg & Vaughan, 2014, p. 288). In this respect one has to bear in mind that many consequences result from cohesiveness, with positive and negative ones lying close to each other. In-groups and out-groups form, which can be positive for the in-group members. On the other hand, out-group members are excluded – the basis for the emergence of prejudice. Many more social phenomena could be named here – something SNA should not overlook.

Haythornthwaite (1996) combines the social and analytics perspectives on Social Network Analysis in an interesting way. She mentions that, compared to other analysis techniques, SNA focuses on relationships and their patterns and contents. Thus it “strives to derive social structure empirically, based on observed relationships between actors, rather than on a priori classifications” (p. 325). The world is hence explained by networks, not by groups. Relationships and ties are described in terms of content, direction and strength (I will focus on the latter two).

Direction is asymmetrical when the information flow is one-way only; the directed/undirected classification indicates whether the direction of flow is measured or considered relevant at all. In terms of strength (the intensity of a relationship, beyond its mere existence), either the number of ties and/or their strength can be examined. Haythornthwaite introduces five network principles:

  • cohesion (grouping nodes regarding strong common relationships by e.g. density and centralization; clusters and cliques)
    • density (degree to which members are connected to all other members)
    • centralization (extent to which a set of actors is organized around a central point)
    • clusters (subgroup of highly interconnected actors)
    • cliques (fully connected clusters)
  • structural equivalence (grouping nodes regarding their similarity)
  • prominence (the node in charge)
    • centrality (differs from centralization as it measures the node’s connections in the network rather than measuring the configuration of the network)
    • global centrality or closeness (shortest path between an actor and every other actor in the network)
  • range (a node’s network extent) and
  • brokerage (bridging connections to other networks)
    • betweenness (extent to which an actor sits between others in the network, playing a role as an intermediary)
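
To make the cohesion ideas above concrete, here is a minimal sketch in plain Python (node names and edge list are invented for illustration) that checks whether a set of nodes forms a clique, i.e. a fully connected cluster:

```python
from itertools import combinations

def is_clique(nodes, edges):
    """True if every pair of nodes is directly connected (a fully connected cluster)."""
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset(pair) in edge_set for pair in combinations(nodes, 2))

# A toy network: a triangle {A, B, C} plus a pendant node D
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]

print(is_clique({"A", "B", "C"}, edges))       # True: the triangle is a clique
print(is_clique({"A", "B", "C", "D"}, edges))  # False: D breaks full connectivity
```

The same edge-set idea underlies the density and centrality calculations further below.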

So if we see SNA as the source of tools for the analysis of relational data (Grunspan, Wiggins, & Goodreau, 2014), we can distinguish two classes of hypotheses: why are relations formed, and what are the outcomes of these relations? They also argue that these questions are important, as a learner’s position in the network seems to be correlated with her performance. So in order to understand networks, we need to understand the determinants, structure and consequences of relationships between actors. This needs to be considered in situations where social support or connections are believed to influence the outcomes of interest.

With this in mind, the main methods for SNA are modularity, density and centrality.

Modularity is a way of quantifying the concept of community structure. In brief, it is calculated by subtracting the number of ties one would expect in a similar network with ties placed at random from the number of ties actually falling within groups (Newman, 2006). This incorporates the idea that the raw number of ties within a group is not meaningful until it is compared with the number expected at random.
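
As a sketch of Newman’s idea (the node labels and the two-community partition are made up for illustration): for each community, take the fraction of all ties that fall inside it and subtract the fraction expected under random placement, which depends only on the members’ degrees.

```python
from collections import Counter

def modularity(edges, communities):
    """Q = sum over communities of (fraction of internal ties) minus
    (expected fraction if ties were placed at random, given node degrees).
    edges: undirected (u, v) pairs; communities: list of node sets."""
    m = len(edges)
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    q = 0.0
    for comm in communities:
        internal = sum(1 for u, v in edges if u in comm and v in comm)
        degree_sum = sum(degree[n] for n in comm)
        q += internal / m - (degree_sum / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge tie: a clear two-community structure
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 4))  # 0.3571
```

A good partition yields a clearly positive Q; placing all nodes in one community would give Q = 0 for this formula’s baseline.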

Density, in the words of Hanneman & Riddle (2005), is “the proportion of all possible ties that are actually present.” It is calculated as the sum of ties divided by the number of possible ties. A fully connected network, or a fully connected subgroup of a network (a clique), has a density of 1.
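
For an undirected network with n nodes and m ties there are n(n-1)/2 possible ties, so density = 2m / (n(n-1)). As a quick sanity check, plugging in my own network’s numbers from further below (320 friends, 3182 ties) reproduces the value Gephi reports:

```python
def density(n_nodes, n_ties):
    """Proportion of all possible undirected ties that are actually present."""
    possible = n_nodes * (n_nodes - 1) / 2
    return n_ties / possible

# My Facebook network: 320 actors, 3182 ties (figures from the Gephi import report)
print(round(density(320, 3182), 3))  # 0.062, matching Gephi

# A triangle is a clique: density 1.0
print(density(3, 3))  # 1.0
```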

Network types can be described as unipartite (one type of actor) vs. bipartite (actors linked to the groups to which they belong); undirected vs. directed (see the Facebook network example below); and binary ties (mere existence) vs. valued ties (additional quantitative data). At the actor level, several measurements of centrality have been proposed: degree centrality (the total number of connections a node has; in directed networks split into in- and out-degree), betweenness centrality (the extent to which an actor serves as a bridge on the shortest paths between other actors), closeness centrality (how close an actor is to all other actors on average) and eigenvector centrality (being connected to other well-connected nodes) (based on Grunspan, Wiggins, & Goodreau, 2014).
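
The two degree-based measures can be sketched quickly (toy edge list invented for illustration): degree centrality just counts ties, while eigenvector centrality can be approximated by repeatedly replacing each node’s score with the sum of its neighbours’ scores and normalizing (power iteration).

```python
def degree_centrality(edges):
    """Total number of undirected ties per node."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def eigenvector_centrality(edges, iterations=100):
    """Power iteration on A + I: a node scores highly when its neighbours score highly.
    (The + I shift avoids oscillation on bipartite graphs without changing the ranking.)"""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    score = {n: 1.0 for n in nbrs}
    for _ in range(iterations):
        new = {n: score[n] + sum(score[m] for m in nbrs[n]) for n in nbrs}
        top = max(new.values())
        score = {n: s / top for n, s in new.items()}
    return score

# A star network: the hub is the best-connected node on both measures
edges = [("hub", leaf) for leaf in ("a", "b", "c", "d")]
deg = degree_centrality(edges)
eig = eigenvector_centrality(edges)
print(deg["hub"], max(eig, key=eig.get))  # 4 hub
```

On less symmetric networks the two rankings can diverge: a node with few ties can still score well on eigenvector centrality if its neighbours are hubs.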

SNA in action – My Facebook network analysis in Gephi

I was surprised how easy it actually was to gather information about my Facebook network. And in addition, it felt weird. After obtaining the data by running the Facebook app Netvizz (which creates a file that Gephi can use to analyse the network), I decided not to mention individual personal data, to avoid detailed conclusions being drawn from the graphs below. They are used to visualize the main methods introduced above and to briefly discuss some striking results of the analysis.

My Facebook network visualized in Gephi – First attempt

Gephi can be used both for visualizing a network as a sociograph and for identifying individual actors (nodes) and clusters in the dataset. The first step after running Netvizz is to import the dataset as a directed or an undirected network. As I am dealing with Facebook, a tie between two nodes means that both actors agreed to become Facebook friends. Thus, the direction is not relevant for the analysis. Hirst (2010) describes this in his blog, comparing Facebook to Twitter, where direction would matter. As a result, I chose an undirected network for my analysis. From the import report we can see that there are 320 actors (friends) in my network, connected by 3182 ties (connections).

Gephi import report for my Facebook network analysis

After pressing the OK button, the network looks similar to the first graph of this section: an “imbroglio” of nodes and ties, not really interpretable. So it is useful to apply some of the methods introduced above, starting with density.

The density is calculated as 0.062. Strictly speaking, my Facebook network does not exploit its potential, as it does not use all possible connections. While this corresponds to the basic interpretation of density, it is only applicable in a limited way here: there are many subgroups in my network that might have a higher density of their own. In addition, one has to interpret the word “potential” carefully: yes, there are potential connections that could be made, but does that mean they would change the quality, or the information flow, of the network?

Applying nodes ranking degree (colour) in Gephi
Applying nodes ranking degree (size) in Gephi

For a more interpretable visualization, I applied measures of centrality. The first is degree centrality, the total number of connections a node has (as we are dealing with an undirected network, there is no in- and out-degree). In my network, one actor has the highest degree centrality, followed by three friends with quite similar values – all of them, in my eyes, well-connected in Facebook terms. But one has to be careful about making qualitative judgements: degree centrality captures the quantity of connections, not their quality.

Gephi offers many layouts for better visualization, but I have little idea of how they work in detail. In my case, I applied Fruchterman Reingold (“a classical layout algorithm, since 1984; rated with 2/5 stars in quality, 3/5 in speed”). The result looks like this:

Applying the Fruchterman Reingold layout in Gephi

From my understanding, the layout visualizes sub-clusters of the network and arranges them, depending on how close they are to each other, within a given circular area. [1] However, I am not quite sure what determines the centrality (meant literally here: being arranged more centrally within the circle) of single nodes or clusters. What is striking in my Facebook network is that the previously identified high-degree actors all belong to one cluster (the upper right one); in addition, we can find two more clusters. In general, it seems reasonable to label these the “home cluster”, the “work cluster” and the “university cluster”.

Let’s turn to modularity now to identify more relational patterns in this network. The results are as follows: there are 23 communities. In the size distribution chart, the number of nodes (size) is related to the modularity class. [2] In this context, I am not sure what the modularity class tells me about the community. The biggest community has about 90 nodes and a modularity class of 19, whereas there are some small communities with only 1 node but a higher modularity class.

Gephi modularity report for my Facebook network

To get rid of overly small communities, my next idea was to filter out groups smaller than 3 nodes. (This can be done by setting the degree range filter in the topology folder to the range 3-70.) Running the density and modularity statistics again resulted in a density of 0.08 (formerly 0.062), 9 communities instead of 23 and a slightly lower modularity of 0.686.
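
Gephi’s degree range filter can be mimicked in plain Python (toy data, invented for illustration): drop every node whose degree falls outside the range and keep only the ties between surviving nodes. As in Gephi, this filters by node degree, which is why removing low-degree nodes tends to raise the density.

```python
def filter_by_degree(edges, lo, hi):
    """Keep only nodes whose degree lies in [lo, hi], and the ties between them."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    keep = {n for n, d in deg.items() if lo <= d <= hi}
    return [(u, v) for u, v in edges if u in keep and v in keep]

# Toy network: a triangle plus two pendant (degree-1) nodes hanging off it
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (2, 4)]
filtered = filter_by_degree(edges, 2, 70)
print(filtered)  # only the triangle's ties survive
```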

Modularity report after filtering the degree range 3-70

Some additional measures and information: by filtering the nodes, the overall number was reduced to 281 actors and 3142 connections. The average degree of an actor is 22.363, and the network diameter is 6 (meaning that the longest shortest path between any two actors spans 6 ties).
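
The average degree can be checked by hand: every tie contributes to two actors’ degrees, so it is 2m/n, and with the filtered network’s figures this reproduces Gephi’s value. The diameter is the longest of all shortest paths, which a breadth-first search from every node can find on a small example (toy path network, invented for illustration):

```python
from collections import deque

def average_degree(n_nodes, n_ties):
    """Each tie raises two actors' degrees, hence 2m / n."""
    return 2 * n_ties / n_nodes

def diameter(nbrs):
    """Longest shortest path (in ties) over all node pairs, via BFS from every node."""
    best = 0
    for start in nbrs:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for m in nbrs[node]:
                if m not in dist:
                    dist[m] = dist[node] + 1
                    queue.append(m)
        best = max(best, max(dist.values()))
    return best

print(round(average_degree(281, 3142), 3))  # 22.363, matching Gephi

# A path of 5 nodes: the two ends are 4 ties apart
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diameter(path))  # 4
```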

My Facebook network with the node partition modularity class and the nodes ranking (size) of betweenness centrality

The partition function in Gephi is useful for visualizing the 9 identified communities (I had to re-apply the layout, so the positions of the communities no longer correspond completely to the former graphs; the layout is rotated by about 45 degrees clockwise). In addition to this partition by modularity class, I applied the betweenness centrality measure (actors serving as bridges on the shortest paths between other actors). I am focusing on two actors here (the two biggest nodes, one light green, one light blue). Whereas the light blue one corresponds to one of the actors identified by the degree measure, the light green actor is a “new” one, not identified by the degree measure before. [3] Does this exemplify the importance of additional measures besides degree, as the number of connections does not necessarily correspond to the ability to serve as a bridge? In more quantified terms, betweenness centrality measures how often a node appears on the shortest paths between nodes in the network. Actors identified by this measure could be valuable for transferring information from one group to another. And believe it or not, the actor identified by Gephi is a friend of mine whom I always ask for “the gossip” – he/she knows almost everything about different groups, without being connected to most of the people in my network.
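
The effect noted in [3] can be reproduced on a toy network (invented for illustration): when shortest paths are unique, a node’s betweenness is simply the number of pairs whose shortest path runs through it, and a low-degree “bridge” node can top that ranking without topping the degree ranking.

```python
from collections import deque
from itertools import combinations

def bfs_distances(nbrs, start):
    """Shortest-path distances (in ties) from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for m in nbrs[node]:
            if m not in dist:
                dist[m] = dist[node] + 1
                queue.append(m)
    return dist

def betweenness(nbrs):
    """For each node v, count pairs (s, t) whose shortest path passes through v.
    Valid as written only when shortest paths are unique (true for this toy graph)."""
    dist = {n: bfs_distances(nbrs, n) for n in nbrs}
    score = {n: 0 for n in nbrs}
    for s, t in combinations(nbrs, 2):
        for v in nbrs:
            if v not in (s, t) and dist[s][v] + dist[v][t] == dist[s][t]:
                score[v] += 1
    return score

# Two triangles {0,1,2} and {4,5,6} linked only through node 3
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]
nbrs = {}
for u, v in edges:
    nbrs.setdefault(u, []).append(v)
    nbrs.setdefault(v, []).append(u)

deg = {n: len(nbrs[n]) for n in nbrs}
btw = betweenness(nbrs)
print(max(deg, key=deg.get), max(btw, key=btw.get))  # 2 3
```

Node 3 has only two ties, yet every cross-group pair must pass through it, so its betweenness (9 pairs) beats the higher-degree triangle members – exactly the “gossip friend” pattern.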

My Facebook network with the node partition modularity class and the nodes ranking (size) of closeness centrality
My Facebook network with the node partition modularity class and the nodes ranking (size) of eccentricity

Two additional measures I applied can be seen in the graphs above: closeness centrality (how close an actor is to all other actors on average, i.e. the average distance from a given starting node to all other nodes in the network) and eccentricity (the distance from a given starting node to the node farthest from it in the network). The most striking fact is that no actor in the network explicitly stands out from the rest. However, in smaller communities the centrality seems to be lower on both measures. Should this be surprising? [4] Isn’t it logical that, being part of a smaller sub-cluster, closeness centrality and eccentricity decline, as smaller groups are per se connected to fewer nodes in the network? 
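
Both measures fall out of the same BFS distances (toy path network, invented for illustration): closeness rewards a low average distance to everyone, eccentricity reports the distance to the farthest node, and the middle of a path scores best on both.

```python
from collections import deque

def bfs_distances(nbrs, start):
    """Shortest-path distances (in ties) from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for m in nbrs[node]:
            if m not in dist:
                dist[m] = dist[node] + 1
                queue.append(m)
    return dist

def closeness(nbrs, node):
    """Inverse of the average distance to all other nodes (higher = more central)."""
    dist = bfs_distances(nbrs, node)
    others = len(dist) - 1
    return others / sum(dist.values())

def eccentricity(nbrs, node):
    """Distance to the farthest node (lower = more central)."""
    return max(bfs_distances(nbrs, node).values())

# A path of five nodes, 0-1-2-3-4: node 2 sits in the middle
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(eccentricity(path, 2), eccentricity(path, 0))  # 2 4
print(closeness(path, 2) > closeness(path, 0))       # True
```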

Resources

Baker, R., & Siemens, G. (2014). Educational data mining and learning analytics. Cambridge Handbook of the Learning Sciences

Burt, R. S., Kilduff, M., & Tasselli, S. (2013). Social network analysis: foundations and frontiers on advantage. Annual review of psychology, 64, 527-547. doi: 10.1146/annurev-psych-113011-143828 (full text)

Festinger, L., Schachter, S., & Back, K. W. (1950). Social Pressures in Informal Groups. Stanford, CA: Stanford University Press.

Grunspan, D. Z., Wiggins, B. L., & Goodreau, S. M. (2014). Understanding Classrooms through Social Network Analysis: A Primer for Social Network Analysis in Education Research. CBE-Life Sciences Education, 13(2), 167–178. doi:10.1187/cbe.13-08-0162 (full text)

Hanneman, R. A. & Riddle, M. (2005). Introduction to social network methods. Riverside, CA: University of California, Riverside (full text).

Hirst, T. (2010, April 16). Getting Started With The Gephi Network Visualisation App – My Facebook Network, Part I, Retrieved November 11, 2014, from http://blog.ouseful.info/2010/04/16/getting-started-with-gephi-network-visualisation-app-my-facebook-network-part-i/

Hogg, M. A. & Vaughan, G. M. (2014), Social Psychology, 7th Edition, Pearson Education

Newman, M. E. J. (2006). Modularity and community structure in networks. Proceedings of the National Academy of Sciences of the United States of America, 103(23), 8577–8582.

Shum, S. B., & Ferguson, R. (2012). Social Learning Analytics. Educational Technology & Society, 15(3), 3-26.

UTArlingtonX: LINK5.10x Data, Analytics, and Learning or #DALMOOC (Week 2)

Interweaving resources and competencies

In my last post about #DALMOOC I described the course structure and its challenges and reflected on the content. It is a nice structure, but it does not contribute to readability. This time I will try a different approach: taking the three exclamation marks (in the figure below) as the main themes of this post, I will interweave different resources, competencies and assignments.

The data cycle in data analytics

The data cycle adapted from Siemens, G. (2013). Learning analytics: The emergence of a discipline. American Behavioral Scientist.

The data cycle is the main theme of week 2’s content in #DALMOOC. This data cycle (or data loop) consists of seven successive steps that are necessary to make learning data meaningful. Data cleaning and integration can only take place after a potential data source has been identified and the data has been stored. The cleaning and integration part is especially important when it comes to combining different data sets, as both need to be able to “communicate” with each other. The figure visualizes that the actual analysis of data is only the 5th step in the cycle – highlighting the importance of planning the analytics process. The action taken based on data relies mainly on its representation and visualization: by representing and visualizing the data (or better: our analysis results), they are “socialized” to ease understanding. [related to Competency 2.1: Describe the learning analytics data cycle]

No word on learning so far, but before diving deeper into the data cycle I want to come back to my definition of learning analytics: “The field of Learning Analytics offers methods to analyse learners’ behavior in a learning environment, thereby providing groundwork and significant potential for the improvement of learning environments and individual learners’ feedback and learning outcomes.” The bold parts have been added with regard to Shum & Ferguson (2012) [Social Learning Analytics. Educational Technology & Society, 15(3), 3-26]. I adapted my definition because the term “potential” points towards the possible future development of (S)LA, and “learning outcomes” adds a significant part of what needs to be improved: the result of a learning process.

Shum & Ferguson also introduce the interesting concept of Social Learning Analytics (SLA). It is based on learning theories and pinpoints learning elements that are significant in a participatory online culture. Primarily, they acknowledge that learners are not learning alone but engaging in a social environment, where they can interact directly or their actions can be traced by others.

While this is a new context for thinking about analytics, we need to understand the possibilities offered by SLA, which are either social in themselves (e.g. social network analysis, discourse analytics) or can be socialized (e.g. social learning disposition analytics, social learning content analytics). Moreover, the challenge of implementing these analytics is still present. Shum & Ferguson emphasize the impact of authentic learning from real-world contexts through the use of practical tools. Important features of a participatory online culture are the need for a complementary open learning platform (“digital infrastructure”), the understanding of how open source tools can enable people to use the potential of these tools (“free and open”), the importance of SLA as part of individual identity and credibility of skills (“aspirations across cultures have been shifting”), SLA as an integral part of an employee’s toolkit (“innovation in complex, turbulent environments”) and analytics as a new form of trusted evidence (“role of educational institutions is changing”). Social learning is of particular interest in the non-academic context as well.

What remain are the challenges of powerful analytics: which measures do we use? Who defines them? What do we want to measure? How do we define access rights? Are we focusing too much on by-products of online activity? How do we balance power?

Data cleaning

Earlier I claimed that usability is not always what we need. Shum & Ferguson state: “User-centered is not the same as learner-centered: what I want is not necessarily what I need, because my grasp of the material, and of myself as a learner, is incomplete”. When starting to work with a data set, it is highly important that it is as complex and complete as possible. If the data set itself does not embody the complexity of the problem, no other element in the cycle can. It is the visualization that simplifies and “socializes” the data – not the data cleaning, integration or analysis.

This automatically calls for “quality data” [Siemens, G. (2013). Learning analytics: The emergence of a discipline. American Behavioral Scientist.]. Siemens introduces this idea (among others) with a quote from P. W. Anderson: “more is different”. In my opinion, this is one of the key elements of learning analytics and/or analytics in general: we are afraid of complex data because, beyond a certain degree, we are no longer able to process them. Instead of rejecting data sets because of their complexity, good analytics can help us (as a “cognitive aid”, according to Siemens) to process them and make sense of them. Data will not stop existing – by refusing to handle them we make our lives easier, but we may ignore potential sources for understanding our lives better. We failed with classification systems, so now the time has come for big data and algorithms. 

In addition to techniques of LA (as in Baker and Yacef 2009, adapted in my last post), Siemens mentions categories of LA applications. These are used for modeling user knowledge, behavior and experience. Thus, they can create profiles of users, model knowledge domains, and perform trend analysis, personalization and adaptation. Siemens illustrates his point with the quote: “A statistician may be more interested in creating probability models to identify student performance (technique), whereas a sociologist may be more interested in evaluating how social networks form based on technologies used in a course (application).”

Data representation & visualization

There was a time when I had to work with SAP, data sets and visualizations of these sets every day. And I am thankful for one lesson I learned during this time: never change your data set to make it more illustrative. Keep the data set and base your visualization on it. That is the reason I like pivot tables so much: they can illustrate your analysis results, other users can adapt them, and the data set stays the same.

However, one has to keep in mind that analytic tools are programmed by others; to understand the way they work, it is important to be familiar with the methods in use and how they are applied within such tools. During the DALMOOC we will work with different data analysis tools. One of them is Tableau.

Tableau

[related to Competency 2.2: Download, install, and conduct basic analytics using Tableau software]


Tableau is a software that helps you visualize your data (based on business intelligence approaches). It has an impressive range of options for this purpose, which is (from my point of view) easy to understand and apply. Data can be imported in different formats, and besides common bar charts one can create “interactive” dashboards: arrangements of visualizations that other users can adapt via filters and that can show more details of the data set on demand.

However, making data fit into Tableau can be challenging. The data sets have to be formatted in a certain way so the software can work with them. That is what I faced when going a little beyond #Assignment56 from the DALMOOC assignment bank, “Finding Data”. [As the title says, find data sources that you can use in Tableau (you will need to watch the Tableau videos for this week in order to become familiar with this). Look for educational data on government, university, or NGO sites. Create a blog post (or post to the edX forum) on these data sources.]

Coming from Germany, I did some research on the Open Data movement there. The Bundeszentrale für politische Bildung (BPB, Federal Agency for Civic Education) offers an Open Data dossier describing the idea of Open Data, case studies and currently running projects. In their words, the idea of Open Data is to make data available and usable for free. In this respect it is important to know the potential of data and data journalism for sustainable democratic development. Yet this platform does not offer data sets itself; it refers to a general framework for such a platform and to local pilot projects.

In this context they refer to the term “Hyperlokaljournalismus”, which can be seen as the opposite of classic “Lokaljournalismus”: it offers very specific and detailed news in a dynamic way. Such news can be adapted to the location of the user and thus concentrate on the immediate surroundings.

Three examples of Open Data platforms are daten.berlin.de, offenedaten.de and the “Statistisches Landesamt Rheinland-Pfalz“. Formats and range of data differ on each platform, but the idea matches the statements of the BPB: offer data for free, available to everyone. Nevertheless, when browsing bigger institutions for data sets, it was mostly the visualization and not the data set that was available. For the data set, you sometimes had to download a form, sign it, describe the purpose you want to use the data for, and then send it via fax or e-mail. Why should I do this when I am just looking for a data set for an individual analysis? I see the point that data collection can be very demanding and a researcher wants to protect his/her work. But when will we finally accept that Open Data can contribute to a collective research community that works on data together? How do you enable easy data check-ups if I have to send you an e-mail beforehand? And regarding the different data set formats and integrating them into a certain tool: how long will we still need to clean data so that they fit into our analysis tool? Will there be a future without the need to edit data formats? I do hope so.

Action

Although action was not a major theme in the course, I find it a very important part of the data cycle. In particular, action depends on data visualization, so it is crucial to know for whom we are visualizing our analysis results and what their needs are. This applies at organizational levels as well (“micro-, meso- and macro-analytics layers”, according to Siemens), where we can have top-down and bottom-up approaches and the application of different questions and viewpoints. This emphasizes the directions LA can take and how they need to adapt to the interest group they are serving. The main interest group – the learner and his/her needs – however, must not be forgotten in this context. [related to Competency 2.3: Evaluate the impact of policy and strategic planning on systems-level deployment of learning]

Coming back to Siemens: he describes the challenges of LA (besides data quality) as privacy and the centrality of human and social processes. I had the chance to touch upon the privacy topic during the Bazaar assignment, where I happened to be in a group with him. This again shows me the value of interactive tools when they are used in a suitable way.

Privacy is also one major topic in Duval, E. (2011, February). Attention please!: learning analytics for visualization and recommendation. In Proceedings of the 1st International Conference on Learning Analytics and Knowledge (pp. 9-17). ACM. He describes machine-readable traces of attention and raises the question of what should be measured to understand how learning takes place. With regard to privacy, he touches upon which elements should be tracked and whether the user should be aware of this. He refers to the “Attention-Trust” principles of property, mobility, economy and transparency. The term “filter bubble” is introduced, connected to the idea that filtering per se can be a threat and can anticipate user choices and ideas. This is related to Shum & Ferguson’s user-/learner-centeredness, as it is always a question of who sets the filters and how the filters work.

What is missing?

I would love to spend more time working with Tableau and the other tools in this course, but I fear I cannot cover this within the given timeframe. So I will focus on the tool matrix and complete it while using the tools for basic explorations, doing the readings and dealing with other course resources.