Exercises

The exercises are intended to get you comfortable using the network software. You do not need to hand them in, although if you do I will be happy to comment on them. 

General reminder: Make sure that when you start UCINET your default folder (shown at the bottom of the screen) is set to the UCINET data folder, normally: C:\Users\<yourusernamehere>\Documents\UCINET Data


  • Relations among Relations (relations among relations.docx)
    Posted Mar 27, 2017, 4:55 PM by Steve Borgatti
  • Centrality Exercise: Centrality exercise from 2005. May need adjusting for modern UCINET. [pdf]
    Posted Mar 29, 2015, 2:48 PM by Steve Borgatti
  • Visualization Exercise: This is a tutorial/exercise that combines data importation, basic data manipulation, and visualization.
    Posted Feb 23, 2015, 6:20 PM by Steve Borgatti
  • Cohesive Subgroups Lab: For this lab we will use three datasets. KAPTAIL: This is a stacked dataset containing four dichotomous matrices.  There are two adjacency matrices each for social ties (indicating the pair ...
    Posted Apr 10, 2014, 6:11 PM by Adam Jonas
  • Brokerage & Local Position Lab: For this lab we will use only one network and one attribute dataset. KRACK-HIGH-TECH (KHT) & HIGH-TEC-ATTRIBUTES (HTA): This is a dataset collected by David Krackhardt from ...
    Posted Mar 24, 2014, 8:09 PM by Adam Jonas
  • Exercise 7. Characterizing whole networks: For this exercise we will use four datasets. CAMPNET: This is a dichotomous adjacency matrix of 18 participants in a qualitative methods class.  Ties are directed and represent that the ...
    Posted Mar 2, 2014, 6:42 PM by Steve Borgatti
  • Exercise on matrices: [This exercise is for the centrality week] Please find attached an exercise on adjacency matrices, matrix notation, and eigenvalues and eigenvectors.
    Posted Mar 2, 2014, 12:09 PM by Steve Borgatti
  • Exercise 6: Network Hypotheses. Hypothesis Testing Lab: For this lab we will use four datasets. CAMPNET: This is a dichotomous adjacency matrix of 18 participants in a qualitative methods class.  Ties are directed and ...
    Posted Feb 25, 2014, 2:00 PM by Adam Jonas
  • Exercise 5: Visualization. Load the Krack-high-tec dataset into NetDraw. Load the Krack-high-tec attributes file, “high-tec-attributes”, into NetDraw. Use the lightning bolt to apply a graph layout algorithm ...
    Posted Feb 17, 2014, 10:18 AM by Jesse Fagan
  • Exercise 4. Data. Artcoinfo: From the maydata.xls file (a standard UCINET file but also available here), grab the valued network contained in the artcoinfo tab and import it into UCINET. Call the result ...
    Posted Feb 7, 2015, 2:32 PM by Steve Borgatti
  • Exercise 3: Data Collection. In class, we will construct a whole network survey with Qualtrics, administer it, and then read the data into UCINET for analysis. For the survey: Gender, position (student/faculty), college ...
    Posted Jan 31, 2016, 6:22 PM by Steve Borgatti
  • Exercise 2b: Matrix Multiplication. For general instructions on completing the exercises, visit the Exercises page. In this exercise, we will be using matrix multiplication. The best way to do that in UCINET is to ...
    Posted Jan 25, 2015, 12:36 PM by Steve Borgatti
  • Exercise 2a: Graph-Theoretic Concepts. For general instructions on completing the exercises, visit the Exercises page. For this exercise we will use datasets CAMPNET and ZACKAR. CAMPNET: This is a network of 18 participants in ...
    Posted Jan 25, 2015, 12:20 PM by Steve Borgatti
  • Exercise 1. A few basics. For general instructions on completing the exercises, visit the Exercises page. Set UCINET's default folder to the folder containing all of the UCINET standard datasets. Unless you have a ...
    Posted Jan 25, 2015, 12:16 PM by Steve Borgatti
Showing posts 1 - 14 of 14.


Relations among Relations

posted Mar 27, 2017, 4:54 PM by Steve Borgatti   [ updated Mar 27, 2017, 4:55 PM ]

Centrality Exercise

posted Mar 29, 2015, 2:46 PM by Steve Borgatti   [ updated Mar 29, 2015, 2:48 PM ]

  • Centrality exercise from 2005. May need adjusting for modern UCINET. [pdf]

Visualization Exercise

posted Feb 23, 2015, 6:20 PM by Steve Borgatti

This is a tutorial/exercise that combines data importation, basic data manipulation, and visualization.

Cohesive Subgroups Lab

posted Apr 10, 2014, 6:11 PM by Adam Jonas

For this lab we will use three datasets:

            KAPTAIL:

This is a stacked dataset containing four dichotomous matrices.  There are two adjacency matrices each for social ties (indicating the pair had social interaction) and instrumental ties (indicating the pair had work-related interaction).  The two pairs of matrices represent two different points in time.  The names of the datasets encode the type of tie in the sixth letter, and the time period in the seventh.  Thus, the dataset KAPFTS1 is social ties at time 1 and KAPFTI2 is instrumental ties at time 2, etc.

ZACKAR & ZACHATTR:

ZACKAR is another stacked dataset, containing a dichotomous adjacency matrix, ZACHE, which represents the simple presence or absence of ties between members of a Karate Club, and ZACHC, which contains valued data counting the number of interactions between actors.  ZACHATTR is a rectangular matrix with three columns of attributes for each of the actors from the ZACKAR datasets.

            PV504

PV504 is a 504-actor network of consultants working for an R&D consulting firm.  The data are symmetric and valued and represent the number of days that pair of individuals worked on a project together.


EXERCISES:

1)                 Hierarchical Clustering using UCINET with ZACKAR

a)     This section uses the ZACHE dataset (you may have to unpack ZACKAR using Data | Unpack to create ZACHE) and the ZACHATTR attribute dataset.


b)     Now run Hierarchical Clustering (HiClus) using the SIMPLE_AVERAGE method.  Interpret your results.  Why might you use SIMPLE_AVERAGE over the other methods?  It isn't necessary, but you may want to experiment with the other methods to see what works and what doesn't.
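If you want to see what simple-average linkage is doing, here is a minimal pure-Python sketch of agglomerative clustering: at each step it merges the two clusters with the smallest mean pairwise distance. The distance matrix is a made-up toy, not the Zachary data, and this is not UCINET's actual implementation.

```python
# Minimal agglomerative clustering with simple-average linkage.
# Toy distance matrix (hypothetical values, not ZACHE).
dist = {
    ("a", "b"): 1, ("a", "c"): 4, ("a", "d"): 5,
    ("b", "c"): 4, ("b", "d"): 5, ("c", "d"): 2,
}

def d(x, y):
    return dist[tuple(sorted((x, y)))]

def avg_linkage(c1, c2):
    # "simple average": mean pairwise distance between the two clusters
    pairs = [(i, j) for i in c1 for j in c2]
    return sum(d(i, j) for i, j in pairs) / len(pairs)

clusters = [("a",), ("b",), ("c",), ("d",)]
merges = []
while len(clusters) > 1:
    # merge the pair of clusters with the smallest average distance
    c1, c2 = min(
        ((x, y) for i, x in enumerate(clusters) for y in clusters[i + 1:]),
        key=lambda p: avg_linkage(*p),
    )
    clusters = [c for c in clusters if c not in (c1, c2)] + [c1 + c2]
    merges.append(clusters[-1])

print(merges)  # merge order: closest pairs join first
```

The merge order is the dendrogram read bottom-up: "a" and "b" join first, then "c" and "d", then everything.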


 

2)                 Newman-Girvan using NetDraw with ZACKAR

a)     Open the ZACKAR stacked dataset in NetDraw.  It should open displaying the ZACHE relation; if not, switch to it. 

b)     Now, open the attribute file, ZACHATTR, using the folder with the A next to it. 

c)      Run the Girvan-Newman analysis (Analysis | Subgroups | Girvan-Newman) specifying a minimum of 2 and a maximum of 40 clusters desired. It should automatically color your nodes so that nodes are one of two colors.  What it has done behind the scenes is color based on the ngPart_2 partition (a partition with 2 colors).  Click on the color palette icon and pull down on the drop down list to select ngPart_3 to see how it partitions it next.  And then ngPart_4.  How useful are these partitions?

d)     Using the color palette, go back to the ngPart_2 partition.  Now, click on the shape palette icon, and select “Club” from the list.  This will shape the nodes according to which club the members went to after the split.  How well did the Girvan-Newman algorithm predict the affiliation of the club members?
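The idea behind Girvan-Newman can be sketched in a few lines: score each edge by how many geodesics run through it, then cut the top scorer, which splits the network at its bridges. The toy graph and the brute-force path enumeration below are illustrative assumptions; real implementations use Brandes' edge-betweenness algorithm and run on the actual data.

```python
from collections import deque
from itertools import combinations

# Two triangles joined by a bridge (toy graph, not the Karate Club data)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}

def geodesics(adj, s, t):
    # BFS distances from s, then enumerate all shortest s-t paths
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    paths = []
    def walk(u, path):
        if u == t:
            paths.append(path)
            return
        for v in adj[u]:
            if dist.get(v) == dist[u] + 1:
                walk(v, path + [v])
    if t in dist:
        walk(s, [s])
    return paths

# Score each edge by how many geodesics pass through it, then cut the top edge
score = {}
for s, t in combinations(adj, 2):
    for p in geodesics(adj, s, t):
        for a, b in zip(p, p[1:]):
            e = tuple(sorted((a, b)))
            score[e] = score.get(e, 0) + 1
cut = max(score, key=score.get)
adj[cut[0]].discard(cut[1])
adj[cut[1]].discard(cut[0])
print("removed edge:", cut)  # the bridge (2, 3) carries the most geodesics
```

After the bridge is removed, the two triangles are the two communities, which is exactly the two-color ngPart_2-style partition.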

 

3)                 Factions using Netdraw with ZACKAR

Now run Analysis | Subgroups | Factions, selecting 2 for the desired number of groups.  This time, instead of using the color palette, use the “Nodes” tab in the control area on the right-hand side of the screen, scroll down to the last attribute (it should be called “Factions 2”), and click the “Color” checkbox.  How do the Factions results compare with the Girvan-Newman results in terms of predicting the affiliations?  How could you display the Girvan-Newman results, the Factions results, and the hierarchical clustering results ALL at the same time?

 

4)                 Cliques using UCINET and NetDraw with KAPFTS2

a)     If you have not done so before, unpack the KAPTAIL using Data | Unpack.

b)     In UCINET run Network | Subgroups | Cliques on KAPFTS2 with a minimum size of 3.  How many cliques do you get?  How many actors are in this network?  How useful is this? 

c)      Visualize KAPFTS2 in NetDraw.  Does this help us identify clique structures? 

 

d)     Now open CliqueOverlap, an actor-by-actor matrix created when we ran Cliques in UCINET, in which each cell holds the number of different cliques that the pair of actors is in together.  Start increasing the filter at the bottom of the “Rels” tab on the control panel on the right side of the screen up from 1 using the “+” button.  Does this indicate significant or minimal overlap between cliques in this network?

 

e)     Now set the filter back down to 0 and open CliqueSets in NetDraw and redraw the picture (lightning bolt).  This is a two-mode network where lines indicate that actors (typically red circles with names) belong to a specific clique (typically blue squares with numbers).  What does this picture convey about the structure of the network?  Are there actors who seem embedded in a lot of different cliques? 

 

f)       Run Analysis | Centrality on these data specifying the undirected option.  (Although this is a 2-mode network, NetDraw allows you to run centrality on it.)  Now size the nodes by degree centrality.  Who is embedded in the most unique cliques?  Who is next?
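What the Cliques routine enumerates is maximal cliques. A standard way to do that is the Bron-Kerbosch algorithm; the sketch below runs it on a small hypothetical undirected graph (not KAPFTS2) with the same minimum-size-3 filter the exercise uses.

```python
# Bron-Kerbosch enumeration of maximal cliques, minimum size 3.
# Toy undirected graph (hypothetical, not the KAPFTS2 data).
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4, 5},
       4: {1, 2, 3, 5}, 5: {3, 4}}

def bron_kerbosch(R, P, X, out):
    # R: current clique; P: candidates to extend it; X: already-processed nodes
    if not P and not X:
        out.append(sorted(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], out)
        P = P - {v}
        X = X | {v}

cliques = []
bron_kerbosch(set(), set(adj), set(), cliques)
cliques = [c for c in cliques if len(c) >= 3]  # UCINET-style size filter
print(cliques)  # the maximal cliques of size >= 3
```

Note how cliques overlap: nodes 3 and 4 sit in both cliques, which is exactly what the CliqueOverlap matrix counts.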

 

5)                 K-CORES using NetDraw with PV504

a)     Open PV504 in NetDraw.  Because it is very large, NetDraw does not optimize the layout automatically when opening it.  To make the diagram more readable, turn off labels (using the script L button on the icon bar), and then redraw the network.  This may take some time, but let it finish.  You should begin to see some structure in the network as it draws it. 

b)     These are valued data about the number of days individuals worked together on projects.  Let’s increase the filtering to be greater than 3 by clicking on the “+” button toward the bottom of the Rels tab in the control region three times.  Now redraw the network by clicking on the lightning bolt.  Much more structure should be visible.

c)      Now run Analysis | K-Cores.  It will automatically color the nodes according to their k-Core.  Select the Nodes tab, and pull down to the *K-core attribute, and use the “s” button below the values to step through the k-cores from 0 to 10.  What does this tell you about the network?

 

d)     Since all nodes of a higher “coreness” are automatically members of the lower cores, we’d like to step down from the highest coreness to the lowest, but cumulatively.  To do this, press the “a” button below the values in the control region to select all the check boxes, then click the “i” button to “inverse” the selection (i.e., uncheck everything that is checked and check everything that is unchecked).  This should leave no boxes checked and a blank screen.  Now check the box next to the highest value (it should be 10) and look at the graph.  Then ALSO check the box next to the second highest value.  Repeat until you have checked all boxes.  Was this more or less useful in evaluating the structure of the network?
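A k-core can be computed by a simple peeling procedure: repeatedly delete any node with fewer than k remaining neighbors until none is left. The graph below is a made-up toy, not PV504.

```python
# Peeling sketch of the k-core: a maximal subgraph in which every node
# has at least k neighbors within the subgraph.  Toy graph, not PV504.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}

def k_core(adj, k):
    nodes = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(nodes):
            # degree counted only within the surviving subgraph
            if sum(1 for u in adj[v] if u in nodes) < k:
                nodes.discard(v)
                changed = True
    return nodes

print(k_core(adj, 2))  # node 4 (degree 1) drops out, leaving the triangle
```

This also shows why cores nest: every member of the 2-core is automatically in the 1-core, which is why the cumulative step-down in (d) makes sense.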

Brokerage & Local Position Lab

posted Mar 23, 2014, 10:54 AM by Adam Jonas   [ updated Mar 24, 2014, 8:09 PM ]


For this lab we will use only one network and one attribute dataset:

            KRACK-HIGH-TECH (KHT) & HIGH-TEC-ATTRIBUTES (HTA)

This is a dataset collected by David Krackhardt from managers of a high tech company.  KRACK-HIGH-TECH is a stacked dataset containing three directed, dichotomous matrices which represent ADVICE, FRIENDSHIP, and REPORTS_TO ties among 21 managers within the company.  HIGH-TEC-ATTRIBUTES contains four attributes for each of the 21 actors, including each manager’s age (in years), tenure with company, level in corporate hierarchy, and department.

 

NOTE: I use the abbreviations KHT and HTA to refer to the full filenames (KRACK-HIGH-TEC and HIGH-TEC-ATTRIBUTES) in the lab for brevity.

 

1)      Structural Holes using UCINET and NetDraw with KHT and HTA

a.       If you have not already done so, unpack (Data | Unpack) the KHT dataset to get the three adjacency matrices FRIENDSHIP, REPORTS_TO, ADVICE. 

b.      Run Network | Ego Networks | Structural Holes on the FRIENDSHIP data.  From the output, who appears to have the largest Effective Size?

c.       Visualize the FRIENDSHIP network in NetDraw. 

d.      When you ran structural holes in UCINET it automatically saved the output in a dataset called “FRIENDSHIP-SH” (unless you changed the name).  Load FRIENDSHIP-SH as an attribute file in NetDraw and use the effective size attribute (EffSize) to size the nodes on the graph.  What information does this convey in the graph?

e.       Again using the “Nodes” tab in the control region, select the density attribute (Density) and click on the “Size” checkbox to resize the nodes based on density.  What happened?  Why?

2)      Transitivity and Clustering in UCINET using KHT

a.       Run Network | Cohesion | Clustering Coefficient on the FRIENDSHIP data.

b.      By default, this procedure creates a file called ClusteringCoefficients.  Open that file in the UCINET spreadsheet (click on the grid icon to bring up the UCINET spreadsheet).  Select the data and label from the column labeled “Clus Coef” and copy it.  Now, open the StructuralHoles dataset created in step 1 in the data grid.  Increase the number of columns by 1 by changing the value in the box under “Cols:” in the “Dimensions” section of the window, click in the empty label cell (grey cell at top) for the new column and paste the data you just copied.  Save this dataset with a new name (e.g., HolesAndClusters) using File | Save As.

c.       Now run Tools | Similarities on this new dataset to find correlations between the variables.  Which of Burt’s structural holes measures is the most like and the most opposite clustering coefficient?  Why?

d.      Run Network | Cohesion | Transitivity on the KHT (KRACK-HIGH-TEC) stacked dataset (NOT on FRIENDSHIP).  What does this tell you?  Given that transitivity is about “Closure” (or lack of structural holes), which of the three relations has the most and the least closure?
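The clustering coefficient used above has a compact definition: for each node, the fraction of its neighbor pairs that are themselves tied. The sketch below uses a hypothetical undirected graph, not the FRIENDSHIP data, and ignores the directed-tie subtleties UCINET handles.

```python
from itertools import combinations

# Local clustering coefficient: the share of a node's neighbor pairs
# that are themselves tied.  Toy undirected graph, not FRIENDSHIP.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}

def clustering(adj, v):
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0  # convention: undefined for degree < 2, report 0
    pairs = list(combinations(nbrs, 2))
    closed = sum(1 for a, b in pairs if b in adj[a])
    return closed / len(pairs)

coefs = {v: clustering(adj, v) for v in adj}
print(coefs)
```

High clustering means closure around a node, which is why the clustering coefficient tends to run opposite to Burt's structural-holes measures: a closed ego network has few holes to broker.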


3)      E-I Index with UCINET using KHT & HTA

a.       Run Network | Cohesion | E-I Index on the FRIENDSHIP data, partitioning the data based on department (which is in column 4 of the HIGH-TEC-ATTRIBUTES dataset).  Looking at the individual E-I index statistics, who has the most homophilous ties (more concentrated within the same department), and who has the most heterophilous (most concentrated outside department) ones?

b.      Rerun E-I index using the same partitioning, but instead of using the FRIENDSHIP dataset, use the stacked KHT (KRACK-HIGH-TEC) dataset.  Bearing in mind that you ran this on a stacked dataset, what do you think these results tell you?  How could you find out?

c.       Display (using the “D” icon) the KHT stacked dataset. Rerun the E-I index on the individual dataset that is displayed first from this command and compare the results to the results from step b.  Is this what you thought was happening?
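The E-I index itself is just (E - I) / (E + I), where E counts a node's ties to other groups and I its ties within its own group. A toy per-node sketch (made-up network and groups, not the KHT/HTA data):

```python
# E-I index sketch: (External - Internal) / (External + Internal),
# computed per node against a group partition.
# Toy data (hypothetical), not the KHT network or department attribute.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
group = {1: "A", 2: "A", 3: "A", 4: "B"}

def ei_index(v):
    ext = sum(1 for u in adj[v] if group[u] != group[v])
    internal = sum(1 for u in adj[v] if group[u] == group[v])
    # note: undefined for isolates (ext + internal == 0)
    return (ext - internal) / (ext + internal)

print({v: ei_index(v) for v in adj})
# -1 = perfectly homophilous, +1 = perfectly heterophilous
```

Node 1 scores -1 (all ties inside group A) while node 4 scores +1 (its only tie crosses groups), which is the per-node reading the exercise asks for.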

4)      Brokerage with UCINET using KHT & HTA

a.       Run Network | Ego Networks | G&F Brokerage roles on the FRIENDSHIP data, again using the department attribute (Column 4 of HTA) for your partition vector.

b.      Open KRACK-HIGH-TEC in Netdraw and, using the “Rels” tab in the control region, display only the FRIENDSHIP relation.  Compare this visualization with the results from step a.  It may help to color or shape the nodes by department.  Can you find at least one example of each kind of brokerage for Actor 5?  (Remember, direction counts in brokerage, so make sure you have the arrows on and visible.)

c.       Running brokerage automatically created two datasets; one, called BROKERAGE, is a two-mode matrix of actors by brokerage roles.  Run Tools | Scaling/Decomposition | Correspondence on the BROKERAGE dataset.  Interpret the picture you get.

d.      Because this dataset is actors by brokerage roles, it (like most output from UCINET routines) can be used as an attribute file.  Load BROKERAGE as an attribute file in NetDraw (making sure the FRIENDSHIP relation is open and displayed first).  Now, size the nodes by the various brokerage roles (Consultant, Representative, etc.).  Does doing this help identify the different kinds of brokerage roles people play in the network?

e.       Rerun the brokerage routine using the REPORTS_TO data instead of the FRIENDSHIP data, still partitioning based on the department attribute.  Thinking about the relationships, what can you tell about the actors based on this output?

f.       Re-run brokerage on the REPORTS_TO data, but this time use the Level attribute (Column 3) to partition that data.  How does this data compare to the previous output?  Why is it different?



5)      Ego-Net Strength with UCINET and NetDraw using KHT

a.       Run Network | Ego Networks | Egonet composition | Continuous alter attributes specifying the ADVICE dataset you previously unpacked from KHT for the Input Network Dataset.  For the Input Attribute dataset, specify HIGH-TEC-ATTRIBUTES and select the column labeled “Tenure.” 

b.      This procedure gives information about each actor’s egonet with respect to their access to “Tenure”.  (This is a newer routine which reads the column labels from the attribute file, so you do not have to look up the column numbers.)  If we say that getting advice from people who have worked for the company longer is more likely to lead to success, which people are most likely to succeed based on these results?

c.       Go back to NetDraw, load KHT and ensure that the ADVICE relation is being displayed.

d.      Now load the dataset just created (by default it was called ADVICE-EgoStrength) as an Attribute file in NetDraw.  Size the nodes based on both the “Sum” and the “Avg” (Average) values calculated.  How do the results differ?  Which do you think is more likely to lead to success?

Exercise 7. Characterizing whole networks

posted Mar 2, 2014, 6:20 PM by Steve Borgatti   [ updated Mar 2, 2014, 6:42 PM ]

For this exercise we will use four datasets:

            CAMPNET:

This is a dichotomous adjacency matrix of 18 participants in a qualitative methods class.  Ties are directed and indicate that the ego named the alter as one of the three people with whom s/he spent the most time during the seminar.

KAPTAIL:

This is a stacked dataset containing four dichotomous matrices.  There are two adjacency matrices each for social ties (indicating the pair had social interaction) and instrumental ties (indicating the pair had work-related interaction).  The two pairs of matrices represent two different points in time.  The names of the datasets encode the type of tie in the sixth letter, and the time period in the seventh.  Thus, the dataset KAPFTS1 is social ties at time 1 and KAPFTI2 is instrumental ties at time 2, etc.

ZACKAR & ZACHATTR:

ZACKAR is another stacked dataset, containing a dichotomous adjacency matrix, ZACHE, which represents the simple presence or absence of ties between members of a Karate Club, and ZACHC, which contains valued data counting the number of interactions between actors.  ZACHATTR is a rectangular matrix with three columns of attributes for each of the actors from the ZACKAR datasets.


EXERCISE:

1)  Cohesion using UCINET with CAMPNET 

a) Calculate the following measures of cohesion using Network | Cohesion: Density, Geodesic Distances, Maximum Flow, Point Connectivity

b) Compare the point connectivity values and the maximum flow values. (Ignore values on the diagonal.) What is the relationship between them? Why do you think that is? Can you find the edge-independent paths (maximum flow) and node independent paths (point connectivity) between Bill and Pat by visualizing Campnet in NetDraw?

c) The Geodesic Distance procedure used in 1a produces a matrix. Looking at the matrix, what is the diameter of the network? What is the longest distance between any two connected nodes? When is this the diameter and when is it not?

d) Go to Transform | Symmetrize and symmetrize Campnet using the Maximum method. Now run distance again, this time on the symmetrized matrix (called Campnet-Sym by default). Now, looking at the matrix and the frequency table above it, what is the diameter of this network?
e) Using your Netdraw visualization, verify a couple entries in the distance, point connectivity, and maximum flow matrices produced.
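Geodesic distance and diameter are straightforward to compute by breadth-first search; the sketch below uses a hypothetical 4-node path graph rather than CAMPNET.

```python
from collections import deque

# BFS geodesic distances; the diameter is the longest shortest path
# between any pair of mutually reachable nodes.
# Toy undirected path graph (hypothetical), not CAMPNET.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

def distances(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist  # unreachable nodes simply never appear

diameter = max(d for s in adj for d in distances(adj, s).values())
print(diameter)  # longest geodesic in this path graph: node 1 to node 4
```

Because unreachable pairs never enter the distance dictionary, this is the "longest distance between any two connected nodes" reading of diameter, which is the distinction question 1c is probing.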

2) Average Degree & Centralization using KAPTAIL

a) Run Network | Centrality | Degree on KAPTAIL being sure to tell UCINET that the data are symmetric. This will generate results for all four networks (matrices, levels) in the dataset.

b) Compare the results for KAPFTS1 and KAPFTS2 (the social ties at time 1 and time 2). What happened to average degree? What happened to network centralization? Does this make sense?

c) Compare the results for KAPFTI1 and KAPFTI2 (the instrumental/work ties at time 1 and time 2). What happened to average degree and centralization here? Does this make sense?
d) Why does centralization behave the way it does compared to average degree across the Social and Instrumental ties? (HINT: Look at the maximum and minimum values.)

3) Fragmentation using UCINET and KAPTAIL

a. Using the KAPFTS1 dataset (you may have to unpack KAPTAIL if you have not already done so using Data | Unpack), calculate its fragmentation under Network | Centrality using the default options. This reports both “Fragmentation” and “Distance Weighted Fragmentation.” Why are the numbers different? Which one is more useful for this network? When would you choose to use one or the other?

b. Based on the results from Exercise 2 above, what do you think will happen to each of the fragmentation measures if you run them for KAPFTS2. Run them to check your answers. Were you surprised? By which measure(s)? Why are the results what they are?
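The two measures reported here can be sketched as they are usually defined: plain fragmentation is one minus the proportion of ordered pairs that can reach each other, while the distance-weighted version credits each reachable pair with 1/distance, so long paths count as partial disconnection. The graph below is a toy two-component example, not KAPFTS1.

```python
from collections import deque

# Fragmentation and distance-weighted fragmentation on a toy graph
# with two components (hypothetical data, not KAPFTS1).
adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: {5}, 5: {4}}

def dists(s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

n = len(adj)
reach = 0   # count of ordered reachable pairs
inv = 0.0   # sum of 1/distance over ordered reachable pairs
for s in adj:
    for t, d in dists(s).items():
        if t != s:
            reach += 1
            inv += 1.0 / d
frag = 1 - reach / (n * (n - 1))
dw_frag = 1 - inv / (n * (n - 1))
print(frag, dw_frag)
```

The distance-weighted number is larger here because even the connected pairs at distance 2 are only "half connected", which is why the two measures can move differently between time periods.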

4) Core-Periphery using UCINET with KAPTAIL

a.      Run Network | Core/Periphery | Categorical on KAPFTS1 and KAPFTS2.  How do the results differ?  During which time period was there a clearer core/periphery structure to the social ties?  What happened to the core between time 1 and time 2?

b.     Run Network | Core/Periphery | Continuous on KAPFTS1.  Find the line where it recommends how many nodes should be in the core.  Does that match the size of the core found from the Categorical procedure?  How might you determine which one better captures the core/periphery nature of the data?

 

 Exercise written by Rich DeJordy

Exercise on matrices

posted Feb 27, 2014, 1:42 PM by Tejaswi Ajit   [ updated Mar 2, 2014, 12:09 PM by Steve Borgatti ]

[This exercise is for the centrality week]

Please find attached an exercise on adjacency matrices, matrix notation and eigen values and eigen vectors.
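As a quick illustration of eigenvalues and eigenvectors of an adjacency matrix: power iteration repeatedly multiplies a vector by the matrix, and after normalizing it converges to the leading eigenvector, which is the basis of eigenvector centrality. The matrix below is a toy symmetric triangle, chosen so the answer is exact.

```python
# Power-iteration sketch: x <- A x, renormalized each step, converges
# to the leading eigenvector of A.  Toy symmetric adjacency matrix.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

x = [1.0, 1.0, 1.0]
for _ in range(100):
    x = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    norm = max(x)
    x = [v / norm for v in x]

# Rayleigh-style estimate of the leading eigenvalue from one row
eigenvalue = sum(A[0][j] * x[j] for j in range(3)) / x[0]
print(x, eigenvalue)  # the symmetric triangle: equal centralities
```

For a triangle every node is structurally identical, so the eigenvector is constant and the leading eigenvalue equals the common degree, 2.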

Exercise 6: Network Hypotheses

posted Feb 25, 2014, 1:59 PM by Adam Jonas   [ updated Feb 25, 2014, 2:00 PM ]

Hypothesis Testing Lab

 

For this lab we will use four datasets:

            CAMPNET:

This is a dichotomous adjacency matrix of 18 participants in a qualitative methods class.  Ties are directed and represent that the ego indicated that the nominated alter was one of the three people with which s/he spent the most time during the seminar.

ZACKAR & ZACHATTR:

ZACKAR is another stacked dataset, containing a dichotomous adjacency matrix, ZACHE, which represents the simple presence or absence of ties between members of a Karate Club, and ZACHC, which contains valued data counting the number of interactions between actors.  ZACHATTR is a rectangular matrix with three columns of attributes for each of the actors from the ZACKAR datasets.

KRACK-HIGH-TEC & HIGH-TEC-ATTRIBUTES

KRACK-HIGH-TEC is another stacked dataset, containing three dichotomous relations (REPORTS_TO, ADVICE, FRIENDSHIP).  HIGH-TEC-ATTRIBUTES contains several attributes about the nodes in KRACK-HIGH-TEC, including Age, Level (CEO, Manager, Staff), Tenure, and Department.

            WIRING

This is a stacked dataset that includes several dichotomous adjacency matrices for 14 employees of the bank wiring room of Western Electric, used in the famous Hawthorne Studies.  Ties are symmetric.  RDGAM records people playing games together during work breaks, RDCON records conflict between people, RDPOS records positive interactions, and RDNEG records negative interactions.

 

1)      Testing dyadic hypothesis

a.       Run Data | Unpack on ZACKAR (if you have not yet), which will create ZACHE and ZACHC.  ZACHE has dichotomous data about the ties and ZACHC has valued data (the strength of ties).

b.      Run Tools | Similarities and use the cross-product measure to compute similarities among the rows of ZACHE.  (The cross product is a very powerful and common matrix operation that, in this case, will count how many friends each pair of actors have in common.)  Call the output CommonFriends.

c.       Go to Tools | Testing Hypotheses | Dyadic (QAP) | QAP Correlation and browse to include both ZACHC and CommonFriends to be correlated and click okay.  What do the results mean?

 

d.      Congratulations, you have just statistically demonstrated the first part of Granovetter’s famous “strength of weak ties” theory, which states that I have stronger ties (ZACHC) with those people with whom I share more friends in common (CommonFriends).
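Conceptually, QAP correlation permutes the rows and columns of one matrix simultaneously (preserving its structure), re-computes the correlation over the off-diagonal cells each time, and uses the resulting distribution as the reference for significance. The sketch below uses tiny made-up 3x3 matrices, not ZACHC and CommonFriends, so the p-value is only illustrative.

```python
import random

# QAP correlation sketch: observed correlation of off-diagonal cells,
# then a permutation distribution from simultaneous row/column shuffles.
# Toy valued matrices (hypothetical), not ZACHC / CommonFriends.
X = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
Y = [[0, 4, 1], [4, 0, 2], [1, 2, 0]]
n = 3

def offdiag(M):
    return [M[i][j] for i in range(n) for j in range(n) if i != j]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

obs = corr(offdiag(X), offdiag(Y))

random.seed(0)
trials, count = 1000, 0
for _ in range(trials):
    p = list(range(n))
    random.shuffle(p)
    # permute rows AND columns of Y with the same permutation
    Yp = [[Y[p[i]][p[j]] for j in range(n)] for i in range(n)]
    if corr(offdiag(X), offdiag(Yp)) >= obs:
        count += 1
p_value = count / trials
print(obs, p_value)
```

Permuting rows and columns together relabels the actors without breaking the dyadic structure, which is what makes QAP valid for network data where ordinary regression assumptions fail.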

2)      Testing multivariate dyadic hypotheses

a.       Unpack the WIRING dataset if you have not done so yet. 

b.      Go to Tools | Testing Hypotheses | Dyadic (QAP) | QAP Regression | Double Dekker.  Put RDCON (conflict between members about whether the windows should be open or shut) in as the dependent variable.  Put RDPOS (positive relationships), RDNEG (negative relationships), and RDGAM (playing games together) in as independent variables.  Before running it, what do you think will most significantly predict conflict?  After running it, are the results what you expected?  How would you explain them?

c.       Record the standardized coefficient and significance for any significant predictor, then run the same procedure two more times (still using the default of 2000 permutations) and record the same results.  Now run the procedure three more times with the number of random permutations set to 100000, again recording the results.  How did this parameter affect the results?  Why? 

3)      Testing monadic hypotheses.

a.       You should have already unpacked the KRACK-HIGH-TEC dataset, but if not, do so now.  You will get three datasets (REPORTS_TO, ADVICE, FRIENDSHIP).  We are going to use the ADVICE dataset.  Run Network | Centrality | Degree on this dataset, using the directed version and telling it NOT to treat the data as symmetric.  By default, it will name the output FreemanDegree.

b.      We are particularly interested in who is sought after for advice, which is captured by indegree centrality.  So we are going to pull out just that column from the results using Data | Extract | Submatrix.  Specify FreemanDegree as your input dataset and that we want to “Keep” “ALL” rows.  Then click on the L to the right of the box for “Which Columns”, select the column labeled “InDegree”, and call your output ADVISING.  This is a measure of how many people said they sought advice from each person.

c.       Display (D) the HIGH-TEC-ATTRIBUTES dataset to determine which columns the AGE and TENURE attributes are in.

d.      Now, it is common wisdom that people look to the “senior” people for advice, but it is unclear in an organizational context whether senior means “older” or “longer tenured”.   You will test whether either of these is supported by the data.  Run Tools | Testing Hypotheses | Node-Level | Regression, specifying ADVISING (with the appropriate column) as your dependent dataset and HIGH-TEC-ATTRIBUTES (with the appropriate columns, i.e., the columns for Age and Tenure separated by a space) as your independent dataset, and set the number of permutations to 10000.  Which meaning of “senior” do the data support?

e.       Why did we use the Regression option of Node-Level instead of T-Test or Anova?  When would we use those?

4)      Testing Mixed-Dyadic Monadic hypotheses

a.       Since it is only fitting that we end where we started, we shall use the campnet data for these final exercises.

b.      You will run Tools | Testing Hypotheses | Mixed Dyadic/Nodal | Categorical attributes | Anova Density twice.  For both, specify CAMPNET as the network matrix, and the gender column of the CAMPATTR matrix as the Actor Attribute.  For the first run, choose “Constant Homophily” for your model, and for the second, choose “Variable Homophily”.  Interpret both sets of results.  What do they mean?  Is there homophily?  Which gender tends to be more homophilous?

 

5)      Using QAP for Mixed Monadic/Dyadic Hypotheses testing.

a.       Using Data | Attribute to matrix, create a matrix of exact matches among the actors in Campnet based on gender.

b.      View this new matrix (named CAMPATTR-MAT by default) in Netdraw.  What does the diagram show?

c.       Use Tools | Testing Hypotheses | Dyadic (QAP) | MR-QAP Linear Regression | Double-Dekker MRQAP  to regress the Campnet network on this new matrix of gender similarity, CAMPATTR-MAT.  What do the results show?

d.      Do you prefer this approach to the ANOVA Density Tables?  When might you use each of these techniques?  What research question might involve using Moran’s I (or Geary’s C) instead of the ANOVA Density Tables?  In that case, how would you use QAP to test for autocorrelation?

 

Exercise 5: Visualization

posted Feb 17, 2014, 10:18 AM by Jesse Fagan

  1. Load the Krack-high-tec dataset into NetDraw

  2. Load the Krack-high-tec attributes file into NetDraw, “high-tec-attributes”

  3. Use the lightning bolt to apply a graph layout algorithm (GLA) to the data

  4. Now let us check out the different relationships in this dataset using the Relations tab.

    1. Provide a screenshot or embedded .jpeg (if you want to use the export command) of each relationship visualised separately.

  5. Now let’s work with only the reports to relationship

    1. Size the nodes by length of tenure (Nodes tab) and save a picture.

    2. Now take the same network and color by department and provide a visualization.

    3. Interpret what the network diagram shows and why it might be this way. Also, explain why it was best to color by the department variable and size by the tenure variable.

    4. Save the picture you made as a .vna file

    5. Now erase the network you have been working with by clicking the ‘new’ icon (the first on the NetDraw icon bar).

    6. Open the .vna file you just saved. Did you notice anything different about this type of file compared to a regular UCInet network file? Hint: Check the nodes tab…

    7. Now export the picture into your word document by exporting as a metafile (the highest quality file you can export as).

  6. Now open the CITIES dataset using metric multi-dimensional scaling (MDS)

    1. Flip the axis to orient the picture; this should look rather familiar…

    2. Take a screenshot

    3. Please explain why MDS is different than using a GLA (What the lightning bolt does). When would you want to use metric MDS vs. non-metric MDS?

Exercise 4. Data

posted Feb 6, 2014, 11:11 AM by Steve Borgatti   [ updated Feb 7, 2015, 2:32 PM ]

Artcoinfo
  1. From the maydata.xls file (a standard UCINET file but also available here), grab the valued network contained in the artcoinfo tab and import it into UCINET. Call the result artcoinfo. This is a network representing how often each person seeks information from each other person. 
  2. Also from maydata.xls, import the attribute data for artco. 
  3. Dichotomize artcoinfo so that only values 5 and above are considered ties. Call the result artcoinfo5
  4. Visualize artcoinfo5 using netdraw
  5. In matrix algebra compute the indegree (column totals) of each node in artcoinfo5. Transfer to Excel and make a scatterplot with tenure (see maydata.xls) as the X-axis and indegree as the Y-axis. Plot a linear trendline including r-square and regression equation. What do you conclude?
  6. Construct a density table for artcoinfo5 based on the hierarchy attribute. (Hint: use Networks|Cohesion|Density|Density by Groups). What's the pattern? 
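Steps 3 and 5 above (dichotomizing so that only values 5 and above are ties, then taking column totals as indegree) can be sketched with a toy valued matrix; the numbers are hypothetical, not the real artcoinfo data.

```python
# Dichotomize a valued matrix at >= 5, then compute indegree as
# column totals.  Toy 3x3 valued matrix (hypothetical), not artcoinfo.
V = [[0, 6, 2],
     [5, 0, 7],
     [1, 9, 0]]

A5 = [[1 if v >= 5 else 0 for v in row] for row in V]  # ties are values >= 5
indegree = [sum(A5[i][j] for i in range(3)) for j in range(3)]  # column totals
print(A5, indegree)
```

Column totals give indegree because cell (i, j) is a tie from i to j, so summing down column j counts how many others nominate j, which is the y-axis variable in the scatterplot of step 5.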
Class Data
  1. Import the "prior acquaintance" network from an online survey of a past class. The raw data downloaded from the survey software is contained in an Excel file called results-survey. Here's a short video showing how to clean the data in Excel and transfer to UCINET:

    high resolution: http://pgbhmeyd.livedrive.com/item/c4adc1f88d9e419ea2b8638f80e7d7b1
    low resolution: http://www.screencast.com/t/rERlVnDx

  2. Visualize the network

