The blockmodeling console app (Win/Linux/Mac)

Current version: Version 1.1 (January 2024) Download page »


Quick-start tutorial


This tutorial will demonstrate how a core-periphery analysis and a blockmodeling analysis are done in Socnet. For this to work, start Socnet in its default interactive mode. In the console/terminal window, you should be greeted by a welcome text, informing you that Socnet is running in interactive mode, and with a prompt (>) on the last line, followed by a cursor, ready to take written input:

Socnet - Network analysis in C#
Version 1.1 (January 2024)
Carl Nordlund - Socnet was supported by NordForsk through the funding to
The Network Dynamics of Ethnic Integration, project number 105147

How to cite specific methods, type in 'citeinfo()'.

Entering interactive mode (type 'quit' to quit, 'help' for help):

Navigating folders and setting working directory

It is here assumed that you have downloaded and extracted the 'example_data' folder, perhaps as a sub-folder to your 'Documents' folder. We start by checking the current working directory (and don't type in the > below: that is only to indicate that you should type this in at the prompt):

> getwd()

The getwd() function prints out the current working directory, which is the specific location on your file system where Socnet will load and save files by default. The exact directory/folder path that getwd() returns depends on your operating system. When you start Socnet, the working directory is likely to be where your executable is, which is not a good place to work with your files.

Use the setwd() function to direct Socnet to a more suitable place. This function takes one argument - 'dir' - which is either the absolute or relative path to the folder you want to navigate to (replace [folder] below with the path of that folder):

> setwd(dir = [folder])

This function has a special option: instead of stating a path to a folder, you could simply state 'user':

> setwd(dir = user)

If you are on a Windows machine and your user account is 'ragnar', this should navigate you to C:\Users\ragnar. If you then happen to have your 'example_data' folder in your standard 'Documents' folder, you should now be able to navigate to your 'example_data' folder with something like this:

> setwd(dir = Documents/example_data)

To check the content of the current folder, use the dir command:

> dir
[...lines omitted...]

Pro-tip: you can navigate to your folder in Windows, copy the file path (Alt+D, then Ctrl+C), and then paste that into the console as needed.

Finding the core-periphery structure in the Baker binary citation data

Load the Baker binary network (journal citation data) into Socnet using the loadmatrix() function. We assume here that your working directory is the 'example_data' folder, which contains the 'baker.txt' example file:

> loadmatrix(file = baker.txt)
Stored structure 'baker_actors' (Actorset)
Stored structure 'baker' (Matrix)
Loading data structure: OK

Socnet will load this file and create two data objects: a Matrix object and an Actorset object. As the file name is 'baker.txt', the Matrix will be named 'baker' and the Actorset 'baker_actors'. (This command has an optional argument, 'name', with which you can specify the Matrix name.)

Use the structures command to see the structures that are currently in memory. structures is a function, but as it doesn't have any arguments, you can skip the parentheses. This will output a simple tab-separated table on the console where you can see the data type, name and size of each object. Note that the tab columns could look a bit messy.

> structures
DataType	Name	Size
========	====	====
Actorset	baker_actors	20
Matrix	baker	20;20

You can inspect the content of each of these by simply typing in their names at the prompt, i.e. baker or baker_actors.

We first want to do the default Borgatti-Everett core-periphery search (Borgatti and Everett 1999), using the default core/periphery measure where potential intra-core ties are correlated with unity and corresponding intra-peripheral ties are correlated with zero. Although this corresponds to a special case of direct blockmodeling, Socnet has a shortcut function for finding core-periphery structures: the coreperi() function. It has multiple optional arguments for different kinds of core-periphery structures (consult the Function reference list below for all such details), and its default settings correspond to the core-periphery measure that Borgatti and Everett recommend in their article. The only two arguments we need to provide are the specific network we want to analyze and the search algorithm to use. For the latter, we use 'localopt', a breadth-first type of search algorithm for finding what is hopefully the partition that yields the highest core-periphery correlation. By default, 'localopt' starts by testing 50 random partitions, picks the best of these, and continues searching for a maximum of 100 steps or until it no longer finds a better partition. This is repeated for a total of 50 times.

> coreperi(network = baker, searchtype = localopt)

Once finished (which should be quite instantaneous), Socnet informs you about the various search parameters that went into this search and how long the search took (in milliseconds). It also reports the name of the optimal BlockModel object and the best goodness-of-fit (which in this case is a weighted correlation, aka 'nordlund'). For the Baker binary data you just tested, this should be 0.8596.
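The measure behind this number can be illustrated with a short sketch. This is not Socnet's actual code: the helper cp_correlation and the toy network are invented for illustration. Given a partition into a core and a periphery, intra-core cells are correlated with 1, intra-periphery cells with 0, and the mixed ("do-not-care") cells and the diagonal are excluded:

```python
# Hedged sketch of the Borgatti-Everett-style core-periphery correlation.
from math import sqrt

def cp_correlation(adj, core):
    """Pearson correlation between observed ties and the ideal
    core-periphery pattern, over intra-core and intra-periphery cells."""
    obs, ideal = [], []
    n = len(adj)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # ignore the diagonal
            if i in core and j in core:
                obs.append(adj[i][j]); ideal.append(1)
            elif i not in core and j not in core:
                obs.append(adj[i][j]); ideal.append(0)
            # core-periphery cells are "do-not-care": excluded
    m = len(obs)
    mo, mi = sum(obs) / m, sum(ideal) / m
    cov = sum((o - mo) * (v - mi) for o, v in zip(obs, ideal))
    so = sqrt(sum((o - mo) ** 2 for o in obs))
    si = sqrt(sum((v - mi) ** 2 for v in ideal))
    return cov / (so * si)

# Toy 4-actor network: 0 and 1 form a mutual core, 2 and 3 a silent periphery.
toy = [[0, 1, 1, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [0, 0, 0, 0]]
print(round(cp_correlation(toy, core={0, 1}), 4))  # 1.0
```

A perfect core-periphery structure yields a correlation of 1; the 0.8596 above says the Baker data comes fairly close.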

If you now use the structures command, you will see a new BlockModel object. The BlockModel data object is however more of a placeholder for other data objects related to a particular solution. This is by design, to keep the global data space less cluttered: instead, we can extract these (hidden) data objects separately as we want.

The name of the resulting Blockmodel should be ‘bm_baker_cp_0’. The naming reflects that it is a BlockModel object, based on the Matrix object ‘baker’, where the blockimage was a core-periphery structure, and where this was the first (and only) optimal blockmodel that was found (the final zero). If there were two solutions with equally good core-periphery correlations, the ‘coreperi()’ function would have provided two BlockModel objects, the latter instead having a ‘1’ at the end. The coreperi() function has an optional 'outname' argument that can be used to rename the resulting BlockModel object.

To view details about this Blockmodel object, type in the name of this object at the prompt, i.e. > bm_baker_cp_0. This will provide more technical details about the solution. To inspect this solution, it is likely better to use the bmview() function:

> bmview(blockmodel = bm_baker_cp_0)

This generates three sections of output, representing the different internal objects of the BlockModel object. First is a simple visualization of the blockmodel, i.e. separating the actors into a core (1st position) and a periphery (2nd position). An X here indicates a non-zero value, and the core-periphery structure should clearly be seen, with a total of 7 journals (actors) in the core:

|\XXXXXX|X     XX     | 0_cw
|X\ XXXX|XX    XX     | 0_cysr
|X \XXXX|    XX    XX | 0_jswe
|XXX\XXX|  X X    XX  | 0_ssr
|XXXX\XX|XXX X  XX XX | 0_scw
|XXXXX\X|    X        | 0_swra
|XX  X X|\     X      | 1_can
| X  X X| \           | 1_fr
|   XX X|  \          | 1_cswj
|      X|   \         | 1_amh
|  XXXXX|    \ X      | 1_asw
|  X   X|     \    X  | 1_bjsw
|XX    X|X   X \      | 1_pw
|XX  X  |       \     | 1_ccq
|    X X|        \    | 1_jgsw
|   X   |         \   | 1_jsp
|  XXX X|     X    \X | 1_swg
|  X X X|          X\ | 1_swhc
|      X|            \| 1_ijsw

Below the blockmodel visualization is the blockimage that this was matched on. As we here used the default Borgatti-Everett approach that ignores off-diagonal blocks, the blockimage contains a complete (com) block for intra-core ties (C-C), a null block for intra-peripheral ties (P-P) and so-called 'do-not-care' ideal blocks for the P-to-C and C-to-P blocks.

	C	P
C	com	dnc
P	dnc	nul

Finally, the goodness-of-fit correlation for this partition is shown, which here should be 0.8596. The '(nordlund)' after this measure indicates that the weighted-correlation-coefficient approach from Nordlund (2020) was used: although the Borgatti-Everett core-periphery measure uses unweighted correlations, i.e. where all value-pairs have the same weight, Socnet still uses the weighted-correlation approach for simplicity (and as that also allows for more advanced intra-core ideal blocks, such as p-cores).
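A hedged sketch of the general form behind such a weighted correlation follows. How Socnet assigns weights per ideal block is internal to the program; here the weights are simply given, and the helper name weighted_corr is invented:

```python
# Weighted Pearson correlation: each value-pair contributes with weight w_i.
from math import sqrt

def weighted_corr(x, y, w):
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return cov / sqrt(vx * vy)

# With all weights equal, this reduces to the ordinary unweighted Pearson
# correlation used in the original Borgatti-Everett measure.
x = [1, 1, 0, 0, 1]
y = [1, 0, 0, 0, 1]
print(round(weighted_corr(x, y, [1] * 5), 4))  # 0.6667
```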

Goodness-of-fit: 0.8596 (nordlund)

The bmview() function also has an optional argument - 'ideal=yes' - for visualizing the ideal matrix. In this case, with the default Borgatti-Everett core-periphery model, this is simply an expanded version of the blockimage.

How certain are we that the found Blockmodel partition is indeed the optimal one? We can rerun the coreperi() function and brute-force test all possible core-periphery partitions, instead of using the local optimization search. Testing a 20-node network with 2 positions, with the minimum cluster size (by default) set to 1, means testing a bit over a million different core-periphery partitions. We test this as follows - and note that we can actually exclude the argument names, as long as we then provide the argument values in the specified order:

> coreperi(baker,exhaustive)
Init and running corr-based core-peri
Initializing direct blockmodeling search...
Network: baker
Method: nordlund
Search heuristic: exhaustive
Blockimage: cp
minclustersize: 1
maxtime: (timeout inactive)
Initialization seems to have gone ok!
Execution time (ms):13384
Updated structure 'bm_baker_cp_0' (BlockModel)
Goodness-of-fit (1st BlockModel): 0.8596 (nordlund)

This will take a bit longer (5-15 seconds depending on your CPU) and will result in the same optimal Blockmodel solution as the one previously found. Do note that the resulting BlockModel object will overwrite the previous one you might have had in memory: thus the 'Updated' rather than 'Stored' in the response you got above.
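The "bit over a million" figure for the exhaustive search above can be checked directly: with n actors and two labelled positions (core and periphery), each actor independently goes to one of the two positions, minus the two assignments that would leave a position empty (minclustersize = 1):

```python
# Count of two-position partitions of a 20-node network with
# minimum cluster size 1.
n = 20
partitions = 2 ** n - 2
print(partitions)  # 1048574
```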

Sometimes it might take a bit longer though - and sometimes it might take really long. The coreperi() function in Socnet has a built-in timer that will abort a search after a certain time limit. This is turned off by default, but can be set using the optional 'maxtime' argument, where the maximum allowed execution time is stated in milliseconds:

> coreperi(network=baker, searchtype=exhaustive, maxtime=3000)

Unless you have a very fast computer, the above will timeout after 3 seconds, providing a lot of setup info and details about where it stopped and how many partitions it managed to test. Once stopped, it is not possible to resume the search.

In the implementation of the Borgatti-Everett core-periphery heuristic in Ucinet, it is also possible to specify the 'densities' of the inter-categorical ties. This is also possible in the coreperi() function, using either the 'ptoc' and 'ctop' arguments (for each of the two inter-categorical blocks) or the 'intercat' argument (which specifies the same ideal block for both inter-categorical blocks). What we provide here are ideal block types, of which Socnet has many - see the Ideal block types section below. All these arguments are optional, and note that the 'intercat' argument takes priority over the 'ptoc' and 'ctop' arguments.

It is also possible to specify the power-relational core-periphery features of peripheral dependency and core dominance as specified in Nordlund (2018). Using the optional 'powerrelational' argument, you can specify an ideal core-periphery structure with peripheral dependency (dep), core dominance (dom), or both (depdom). See the coreperi() specifications.

To mimic how inter-categorical densities are implemented in Ucinet, i.e. where each possible inter-categorical tie is correlated with the specified density, we use the denuci(d) type of ideal block, where we also have to provide the specified density. To search for the optimal core-periphery partition in the Baker network, with a Ucinet-style density of 0.3846, use the following:

> coreperi(network=baker, searchtype=exhaustive, intercat = denuci(0.3846))

As there are additional blocks to check, this will take a bit longer. The resulting optimal Blockmodel solution has the same partition as before, but now with a markedly lower core-periphery correlation (0.5947). This is identical to the solution and correlation obtained in the Ucinet implementation. Thus, despite providing the accurate inter-categorical density for this solution, the resulting core-periphery measure is markedly lower than what is obtained when simply excluding inter-categorical ties altogether. This is due to the way densities are checked here: rather than actually checking whether the inter-categorical blocks have the specified density or not, each of the binary ties in these potential blocks is instead correlated with the provided density, which severely lowers the overall core-periphery correlation.
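A hedged toy illustration of this depressing effect (the helper pearson and all values here are invented, not taken from the Baker data): every inter-categorical cell in the ideal matrix is set to the density d itself, so even an inter-categorical block whose density exactly matches d adds mismatch, since its cells are 0 or 1, never d:

```python
# denuci-style ideal block: inter-categorical cells are compared with the
# constant d, which lowers the correlation even for a perfect fit.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Inter-categorical cells: 10 cells with 4 ties -> density exactly 0.4.
inter_obs = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
d = 0.4

# A perfectly fitting core (all ties) and periphery (no ties), plus the
# inter-categorical cells matched against the constant d:
obs = [1, 1, 1] + [0, 0, 0] + inter_obs
ideal_uci = [1, 1, 1] + [0, 0, 0] + [d] * 10
print(round(pearson(obs, ideal_uci), 4))
```

Despite the core and periphery fitting perfectly and the inter-categorical density matching d exactly, the correlation lands well below 1.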

Repeating the core-periphery search, but this time using the 'corrected' correlation-based density blocks suggested in Nordlund (2023; forthcoming), we specify these inter-categorical blocks to instead use the fixed-density ideal block den(d), again using the stipulated density of 0.3846. Contrary to the denuci(d) ideal block, the den(d) ideal block will check the density of a potential inter-categorical block by correlating the largest ni values in this block with unity and the remaining values with zero, where ni corresponds to the number of binary ties in the specific block that have to be present to get as close as possible to the stipulated density d. We thus replace the specified block in the previous statement:

> coreperi(network=baker, searchtype=exhaustive, intercat = den(0.3846))

This yields an identical partition as before, but this time the core-periphery correlation is 0.9383, which is more indicative of a proper core-periphery structure than 0.5947.
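The den(d) construction described above can be sketched as follows (den_ideal is an invented helper, not Socnet's code): the ni largest observed values in the block are set to 1 in the ideal block and the rest to 0, where ni is the tie count that comes closest to the stipulated density d:

```python
# Hedged sketch of the den(d) fixed-density ideal block.
def den_ideal(block_values, d):
    n_ties = round(d * len(block_values))
    order = sorted(range(len(block_values)),
                   key=lambda i: block_values[i], reverse=True)
    ideal = [0] * len(block_values)
    for i in order[:n_ties]:
        ideal[i] = 1  # the n_ties largest values are correlated with unity
    return ideal

# A binary block whose density already equals d gets an ideal block
# identical to itself, and so contributes no mismatch to the correlation.
inter_obs = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(den_ideal(inter_obs, 0.4) == inter_obs)  # True
```

This is why the den(d) version rewards blocks that actually hit the stipulated density, instead of penalizing every individual cell the way denuci(d) does.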

Extracting and storing results

Given a BlockModel solution like the one obtained above, i.e. bm_baker_cp_0, we use the bmextract() function to extract the results of interest from the BlockModel object. To extract the optimal blockmodel as a sorted Matrix object, we type in the following and get the following output back:

> bmextract(blockmodel = bm_baker_cp_0, type = matrix)
Stored structure 'bm_baker_cp_0_actors' (Actorset)
Stored structure 'bm_baker_cp_0_matrix' (Matrix)
Stored structure 'bm_baker_cp_0_idealmatrix' (Matrix)

This will thus store a new Matrix in Socnet, called 'bm_baker_cp_0_matrix', along with the Actorset belonging to this Matrix object. However, note that we also got a Matrix object that ends with '_idealmatrix'! This is the ideal matrix of your blockmodel solution, indicating the specific ties that were categorized as prominent (1) and non-prominent (0) when matching to the blockimage.

The auto-generated name for the Matrix (and associated Actorset) that we obtained from the bmextract() might be a bit long, so for simplicity, we might rename these to something shorter, using the rename() function:

> rename(name = bm_baker_cp_0_matrix, newname = bm_baker)
Renamed structure 'bm_baker_cp_0_matrix' (Matrix) to 'bm_baker'
> rename(name = bm_baker_cp_0_actors, newname = bm_baker_actors)
Renamed structure 'bm_baker_cp_0_actors' (Actorset) to 'bm_baker_actors'
> rename(name = bm_baker_cp_0_idealmatrix, newname = bm_baker_ideal)
Renamed structure 'bm_baker_cp_0_idealmatrix' (Matrix) to 'bm_baker_ideal'

But we could also have used the optional 'outname' argument of bmextract() to decide on the names of these objects immediately when extracting them.

We can also extract the specific blockimage for this particular solution. Similar to the Matrix and Actorset object types, a BlockImage is a type of object in Socnet:

> bmextract(blockmodel= bm_baker_cp_0, type=blockimage)
Stored structure 'cp' (BlockImage)

As this blockimage derives from the core-periphery function, it is simply called 'cp' by default. It could thus be a good idea to rename this as well, so we know which solution it relates to:

> rename(cp, cp_baker)
Renamed structure 'cp' (BlockImage) to 'cp_baker'

Note that we didn't have to specify 'name' and 'newname' here when providing the arguments: both arguments are compulsory for the rename() function and specified in this order by default, so given two values, Socnet assumes you have given them in the order in which the arguments are specified in the Function reference list.

Finally, we can save our extracted data objects as files. To save the Matrix object for the optimal blockmodel solution as a file with the name 'bm_baker.txt' in the current working directory, we use the save() function:

> save(name=bm_baker, file=bm_baker.txt)
Matrix 'bm_baker' saved: bm_baker.txt
> save(name=bm_baker_ideal, file=bm_baker_ideal.txt)
Matrix 'bm_baker_ideal' saved: bm_baker_ideal.txt

Similarly, to save the corresponding blockimage for this solution as a file called 'blockimage_baker_cp.txt':

> save(name=cp_baker, file=blockimage_baker_cp.txt)
Blockimage 'cp_baker' saved: blockimage_baker_cp.txt

Note that when saving results, there is no need to specify what type of data object you are saving. Also: do note that we saved these files in the 'example_data' folder, which might not be ideal.

Finally, open the result files in a text editor, or load them into R/Python/Excel for further processing/analysis!

Direct free-search blockmodeling of Hlebec notesharing data

In this second example, we will replicate the analysis and findings from Nordlund (2020) on the Hlebec notesharing data (see Žiberna 2007). Make sure to cite the original reference if using this data in a publication:

  • Hlebec, V., 1996. Metodoloske znacilnosti anketnega zbiranja podatkov v analizi omrezji: Magistersko delo. FDV, Ljubljana. [link]

We start with deleting all data objects that might currently be in the memory. This is done with the deleteall() function, which should be used with care:

> deleteall()
Deleted all structures

We load the hlebec example data from the 'example_data' folder:

> loadmatrix(hlebec.txt)

We want to do regular blockmodeling here, using the weighted-correlation-based heuristic proposed in Nordlund (2020), so we first need to specify the blockimages we want to use. To create a 3-positional blockimage where each of the 9 block positions has two potential ideal blocks - a null block or a regular block - we use the blockimage() function, which will return a BlockImage object. We assign this to a new object named 'bi3':

> bi3 = blockimage(size=3, pattern = nul;reg)
Stored structure 'bi3' (BlockImage)

To view this, simply type in the name of this data object:

> bi3
Name:bi3	Datatype:BlockImage	Size:3x3
	P0	P1	P2
P0	[nul;reg]	[nul;reg]	[nul;reg]
P1	[nul;reg]	[nul;reg]	[nul;reg]
P2	[nul;reg]	[nul;reg]	[nul;reg]
Multiblocked: True

Each of the 9 block positions now has two possible ideal block types: a null block and a regular block. As such, this is a so-called multiblocked blockimage, i.e. one where it is not known in advance which particular configuration of ideal blocks is the most optimal. In the end, we will indeed know this, from the (single-blocked) BlockImage object that can later be extracted from the BlockModel objects that hold the solutions.
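A hedged sketch of the raw expansion behind such a multiblocked blockimage (not Socnet's actual routine): each of the 9 positions independently picks one of its allowed ideal blocks. Socnet additionally filters out isomorphic varieties and those with structurally equivalent positions, leaving 118 blockimages for this 3x3 nul/reg case; that filtering is omitted here:

```python
# Raw single-blocked combinations from a 3x3 multiblocked blockimage with
# two allowed ideal block types per position.
from itertools import product

allowed = ["nul", "reg"]   # the two ideal block types per position
size = 3
raw_varieties = list(product(allowed, repeat=size * size))
print(len(raw_varieties))  # 2^9 = 512 combinations before filtering
```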

Whereas the coreperi() function combined initialization and running in a single function, this is normally done in two steps. First, the direct blockmodeling search is initialized. Then, the search is started.

To initialize a direct blockmodeling search, we use the bminit() function. We will work with the 'hlebec' network, using the 'bi3' blockimage as a template, and search using local optimization (with actor switching turned on). The method we will use is the weighted-correlation-based goodness-of-fit measure from Nordlund (2020), and the ideal block implementations from this approach (see Nordlund 2020).

> bminit(network = hlebec, blockimage = bi3, searchtype = localopt, method = nordlund, doswitching=yes)

The bminit() function generates quite a lot of output (and if there is an error, you will be informed):

Initializing direct blockmodeling search...
Network: hlebec
Method: nordlund
nbrrestarts: 50
maxiterations: 100
nbrrandomstart: 50
doswitching: yes
Search heuristic: localopt
Generating from multi-blocked blockimage: bi3 (nbr varieties: 118)
minclustersize: 1
maxtime: (timeout inactive)
Initialization seems to have gone ok!

As 'localopt' is used as the search type, it is also possible to set arguments related to the search. If not set, default values are provided, which are shown here during initialization.

Due to the specific nature of correlation-based goodness-of-fit measures, using a multiblocked blockimage works differently than it would for a more traditional penalty-based (Hamming-style) goodness-of-fit measure. Instead of searching for an optimal configuration of ideal blocks within the 3x3 blockimage, each potential blockimage combination must be examined separately. Thus, the first step of the initialization is to create all non-isomorphic, non-trivial single-blocked varieties of the given multiblocked blockimage. Given these, it subsequently filters out all blockimages of this size that have at least two positions that are structurally equivalent with each other in terms of ideal block types. In this case, with a 3-positional blockimage and two types of ideal blocks, a total of 118 'singleblocked' blockimages are generated and stored internally, with no isomorphism among them and no structurally equivalent blockimage positions. When searching for an optimal partition, each of these 118 blockimages is analyzed separately, each using local optimization, and the individual blockimage where the maximum correlation was found is picked.

Once successfully initialized, we can now start the search:

> bmstart

After quite some time (1-3 minutes), having done 50 reruns of the local optimization algorithm on each of the 118 blockimages, Socnet should have found an optimal Blockmodel solution. This should be stored in memory with the name ‘bm_hlebec_bi3_15_0’ - but the 15 could be a different number! The ‘15’ here indicates that the optimal blockimage was the 15th of the 118 tested blockimages. We can inspect these results with:

> bmview(bm_hlebec_bi3_15_0)

As 'localopt' is a breadth-first search algorithm, it takes a bit more time. Furthermore, we activated the 'doswitching' option above, which also takes a lot more time. Let us try the 'ljubljana' algorithm instead, a depth-first search that still does some breadth-searching at each level before moving on. This algorithm only does moves, no switches, which should also decrease execution time. We initialize it again with the following:

> bminit(network = hlebec, blockimage = bi3, searchtype = ljubljana, method = nordlund)

And we start a new search:

> bmstart

If everything went well, you should find the same solution as previously, but now within just a few seconds. Check that the found solution indeed has a goodness-of-fit of 0.8813; otherwise, rerun it.

For the sake of this tutorial, let's rename this and remove the number:

> rename(name = bm_hlebec_bi3_15_0, newname = bm_hlebec_bi3)

The obtained solution is identical to the one in Nordlund (2020, p. 138, Figure 9B), but do note that the ordering of positions could potentially be different here.

To extract and save this optimal blockmodel, we first use bmextract(), here with the 'outname' parameter to rename what we are extracting:

> bmextract(bm_hlebec_bi3, matrix, outname=bm_hlebec_bi3_matrix)
Stored structure 'actors_bm_hlebec_bi3_matrix' (Actorset)
Stored structure 'bm_hlebec_bi3_matrix' (Matrix)
Stored structure 'bm_hlebec_bi3_matrix_ideal' (Matrix)
> save(name = bm_hlebec_bi3_matrix, file = bm_hlebec_3pos.txt)
Matrix 'bm_hlebec_bi3_matrix' saved: bm_hlebec_3pos.txt

To extract the specific blockimage for this particular solution, we use the following:

> bmextract(bm_hlebec_bi3, blockimage)
Stored structure 'bi3_15' (BlockImage)

Again: note that the '15' could be different. Note that this blockimage is not the multiblocked BlockImage we gave to the search algorithm! The one we extract from this solution is the specific blockimage for this particular solution. You can easily see this by inspecting the 'bi3' and 'bi3_15' BlockImage objects:

> bi3
Name:bi3        Datatype:BlockImage     Size:3x3
        P0      P1      P2
P0      [nul;reg]       [nul;reg]       [nul;reg]
P1      [nul;reg]       [nul;reg]       [nul;reg]
P2      [nul;reg]       [nul;reg]       [nul;reg]
Multiblocked: True
> bi3_15
Name:bi3_15     Datatype:BlockImage     Size:3x3
        P0      P1      P2
P0      [reg]   [nul]   [reg]
P1      [nul]   [nul]   [reg]
P2      [nul]   [nul]   [reg]
Multiblocked: False

Use save() to save the 'bi3_15' blockimage. Although you should be able to see the 'nul' and 'reg' blocks in the 'bm_hlebec_3pos.txt' file you saved earlier, it is also good to have the blockimage corresponding to this particular solution.

Additional considerations

Traditional direct blockmodeling can be time-consuming, especially if the network is relatively large and if the blockimage is not pre-specified apart from the types of blocks that could be included. This is particularly the case for the weighted-correlation-based type of direct blockmodeling suggested in Nordlund (2020), where each possible permutation of a blockimage should be searched individually. The local optimization search implemented in Socnet can be adjusted in various ways to hopefully speed up execution time. Consult the Function reference list below, particularly the various optional arguments for the bminit() function. A couple of suggestions:

  • The 'localopt' search algorithm works by first picking a random partition and checking its goodness-of-fit with the specific blockimage. It then searches so-called neighboring partitions, i.e. alternatives to the first partition where an actor is moved to a different position (move), or where two actors swap places (switch). By default, the local optimization algorithm only moves individual actors between positions, as this takes less time. If the search goes fairly quickly, you should instead set 'doswitching=yes'. This will take more time, but could in certain cases be better at finding the optimal solutions.
  • The 'ljubljana' search algorithm seems to perform better, especially for larger networks. For this algorithm, it is recommended to have a reasonably high 'minnbrbetter' value, and also try many restarts. Contrary to 'localopt', this algorithm has more random elements in how it traverses and searches neighboring partitions, so here it is recommended to have a lot of restarts. Default is thus 50.
  • In general: experiment with different settings for the bminit(). In particular, test with reducing the number of restarts, but preferably not less than 3 restarts. During experimentation, it might be good to add the 'maxtime' argument, to make sure that it at least exits if it gets stuck.
  • At the beginning of each run, the local optimization starts by generating 50 random partitions, calculating the goodness-of-fit of each. It then picks the one that has the highest goodness-of-fit (i.e. correlation) and continues the local optimization search from that partition. The larger the network and the more positions in the blockimage, the higher this number should be.
  • Many of these methods are taken from several different sources. Use citeinfo() to get references to these sources.

Function reference list

Socnet is used by providing it with different commands, for loading/importing, creating, analyzing, and storing/saving results. These commands are Socnet-specific, i.e. Socnet uses its own scripting language. The commands can either be provided on the command line (interactive mode) or in a separate script file containing these commands. Such script files can then be loaded and executed in Socnet using the loadscript() command.


getwd()

Returns the current working directory. This is where Socnet will load and save files by default.

setwd(dir = [FILEPATH])

Sets the working directory to [FILEPATH]. The path can either be absolute or relative. It is also possible to use '..' to move up to the parent folder, e.g. setwd(".."). If specifying 'dir=user', it will navigate to your user folder/directory on Win/Linux/Mac.


dir

Returns a list of the folders and files in the current working directory. Folders are displayed first, surrounded by forward slashes, such as /example_files/.

load(file=[FILEPATH], type=[TYPE(matrix|table|blockimage|partition)], *name=[NAME], *sep=[SEP])

Loads the file [FILEPATH] as a data object of type [TYPE] (can be either 'matrix', 'table', 'blockimage' or 'partition'). The stored data object is named based on the file name (without extension) from [FILEPATH], but the optional 'name' argument can be used to name this object. Loading a data object often implies the creation of more than one data object, such as an Actorset relating to these objects. These are created and stored automatically. By default, the tab ('\t') character is used to separate columns in the files, but this can be specified with the optional 'sep' argument.

loadmatrix(file=[FILEPATH], *name=[NAME], *sep=[SEP])

Shortcut for loading the file [FILEPATH] as a Matrix object. The Matrix will be named [NAME] if the optional 'name' argument is provided. If a suitable Actorset already exists in Socnet, that will be used for the Matrix object. If not, a new Actorset will be created and stored in Socnet. The 'sep' argument is identical to that of the load() function.

loadedgelist(file=[FILEPATH], col1=[COL1], col2=[COL2], *symmetric=[yes|no(default)], *actorset=[ACTORSET], *colval=[COLVAL], *headers=[yes(default)|no], *sep=[SEP(default: \t)])

Imports an edgelist from the file [FILEPATH] and stores it as a Matrix object. 'col1' and 'col2' represent the column indices (starting from 1) for the columns containing the actor labels of the edges.

Optional arguments: if 'symmetric' is set to 'yes' (or just 'y'), all dyads are treated as symmetric. By default, ties are directional, i.e. where 'col1' represents 'from' and 'col2' represents 'to'. If an existing 'actorset' is specified, this Actorset will be used and all actor labels in the two columns will be matched with the Actors existing in this Actorset. If an Actorset is specified and non-matching actor labels are found in COL1 and COL2, an error message will be given, but without aborting the import.

If the imported edges have values, the index of the column containing edge values is specified with the 'colval' argument. It is assumed that the first line in the imported text file is a column header line that should be skipped. If there is no header, i.e. if data starts on the first line, set the 'headers' argument to 'no' (default:yes). By default, it is assumed that columns are separated with the tab character (i.e. \t), but the character separating columns can be specified with the 'sep' argument.
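A hedged sketch of what such an edgelist import conceptually does (not Socnet's actual implementation; edgelist_to_matrix and the sample data are invented): read labelled edges, build the actor set in order of appearance, and fill an adjacency matrix, directed by default with col1 as 'from' and col2 as 'to':

```python
# Toy edgelist-to-matrix import with optional symmetrizing.
import csv
import io

def edgelist_to_matrix(text, col1=1, col2=2, symmetric=False,
                       headers=True, sep="\t"):
    rows = list(csv.reader(io.StringIO(text), delimiter=sep))
    if headers:
        rows = rows[1:]  # skip the header line
    actors = []
    for r in rows:
        for label in (r[col1 - 1], r[col2 - 1]):
            if label not in actors:
                actors.append(label)
    idx = {a: i for i, a in enumerate(actors)}
    m = [[0] * len(actors) for _ in actors]
    for r in rows:
        i, j = idx[r[col1 - 1]], idx[r[col2 - 1]]
        m[i][j] = 1
        if symmetric:
            m[j][i] = 1  # mirror the dyad
    return actors, m

data = "from\tto\na\tb\nb\tc\na\tc\n"
actors, m = edgelist_to_matrix(data)
print(actors)  # ['a', 'b', 'c']
print(m)       # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```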

loadtable(file=[FILEPATH], *name=[NAME], *sep=[SEP])

Shortcut for loading the file [FILEPATH] as a Table object. The Table will be named [NAME] if the optional 'name' argument is provided. If suitable Actorsets already exist in Socnet, these will be used for the Table object. If not, one or two new Actorsets will be created and stored in Socnet. The 'sep' argument is identical to that of the load() function.

loadblockimage(file=[FILEPATH], *name=[NAME], *sep=[SEP])

Shortcut for loading the file [FILEPATH] as a BlockImage object. The BlockImage will be named [NAME] if the optional 'name' argument is provided. The 'sep' argument is identical to that of the load() function.

loadpartition(file=[FILEPATH], *name=[NAME], *sep=[SEP])

Shortcut for loading the file [FILEPATH] as a Partition object. The Partition will be named [NAME] if the optional 'name' argument is provided. If a suitable Actorset already exists in Socnet, that will be used for the Partition object. If not, a new Actorset will be created and stored in Socnet. The 'sep' argument is identical to that of the load() function.

save(name=[NAME], file=[FILEPATH])

Saves the data object [NAME] as the file [FILEPATH]. [FILEPATH] can either be a file name with extension, such as bm_baker.txt, a relative file path (e.g. data/bm_baker.txt) or an absolute file path (e.g. c:\data\bm_baker.txt). All saved files use the tab character ('\t') to separate columns.
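For instance, to save a Matrix object named 'mynet' (a hypothetical object name) to a relative file path:

> save(name = mynet, file = data/mynet.txt)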

saveblockmodel(name=[NAME], file=[FILEPATH])

This function is specifically for saving BlockModel objects as JSON-coded files targeting R. The saved file will be a JSON-encoded R list, which contains the sorted blockmodel matrix, the partition, the blockimage, the goodness-of-fit value, and the type of goodness-of-fit measure. The sole purpose of this is to bring the results into R for further analysis.

To load such a saved JSON file into R, you should use the 'jsonlite' library. Read the JSON file into a string, and pass that string to 'mylist <- unserializeJSON()' to get the blockmodel into R as a list object.
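As a sketch of what this could look like in R (the file name 'bm_baker.json' is just a hypothetical example):

library(jsonlite)
json_str <- paste(readLines("bm_baker.json"), collapse = "\n")
mylist <- unserializeJSON(json_str)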


Loads the script file [FILEPATH] and executes all lines as commands.

These scripts should thus contain Socnet functions. Lines where the first character is a hashtag (#) will be ignored: use these to comment your script or to turn commands on/off.


Rename data structure named [OLDNAME] to [NEWNAME]. Will produce an error if there is already a data structure with [NEWNAME].


Deletes data object named [NAME]. Will produce an error if this is an Actorset that is used by other structures.


Deletes all data objects in Socnet!


Displays a list of all data objects currently in memory. If the 'type' argument is specified and is one of the data types, only data objects of the specified type are displayed.


Displays the data object named [NAME]. If you simply type the name of an existing data object at the prompt, this function will be run for that object.

blockimage(size=[SIZE], *pattern=[PATTERN], *content=[CONTENT])

Returns a BlockImage data object of size [SIZE], where SIZE is an integer greater than 1. If used as an assigner, i.e. preceded by the name of a new data object, the BlockImage object will be stored in Socnet under that name. If used without an assigner, the blockimage() function will output its content on the console, but the object will not be stored.

Two optional arguments specify the content of this blockimage: use either of these, not both. Use 'pattern' to create multiple blocks in each cell, i.e. a multiblocked blockimage, separating block types by semicolons. For instance, pattern=com;nul will fill all blockimage cells with two ideal blocks that will be tested: the complete (com) and the null (nul) block. Use 'content' to provide specific content for each cell, which can be multiblocked: separate each cell with the '|' character (vertical bar) and separate multiple block types per cell using semicolons. The content will fill the cells from the top-left, continuing to the right.

For instance, for a blockimage of size 2, content=com|reg;nul|reg|nul will produce a 2x2 blockimage with a complete block in the top-left position, a regular and a null block in the top-right position, a regular block in the bottom-left position, and a null block in the bottom-right position.
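As another illustration, the following would display (without storing) a 3x3 multiblocked blockimage where every cell tests both the complete and the null block:

> blockimage(size = 3, pattern = com;nul)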

Consult the Ideal block type section below for information about the ideal blocks that have been implemented so far.

partition(actorset=[NAME], nbrclusters=[INT], *partarray=[PARTARRAY])

Returns a Partition data object using the existing Actorset [NAME], prepared for [INT] number of clusters/positions. If used as an assigner, the Partition object will be stored in Socnet: otherwise it will just be displayed on the terminal.

By default, all actors are placed in the first cluster, but the optional 'partarray' argument allows for allocating each actor to a specific cluster. This is done by providing a semicolon-separated list of integers (PARTARRAY): the position of each integer corresponds to the Actor with that internal index, and the value of the integer corresponds to the cluster that the Actor is to be placed in. Cluster indices start at 0, so for a partition with 3 positions/clusters, a cluster index can either be 0, 1, or 2.

Creating partitions is typically done for testing hypothetical blockmodels, i.e. to test a specific hypothetical partition of the actors into a set of positions.
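For example, given an existing Actorset named 'actors' containing six Actors (a hypothetical name and size), the following places the first three Actors in cluster 0, the next two in cluster 1, and the last in cluster 2:

> partition(actorset = actors, nbrclusters = 3, partarray = 0;0;0;1;1;2)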

set(name=[NAME], row=[int], col=[int], value=[string])

This function is used to change/set the values of individual cells of an object [NAME], e.g. adding or removing binary ties in a Matrix object, or changing the content of one block in a blockimage. Provide the row and col(umn) indices for the cell to change, and the value that should be in that cell. Note that the value is a string, as this function can also be used to change specific blockimage cells to other block types. If set() is used on a Matrix, the value will be parsed as a numerical value.
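For instance, to set the tie from the Actor with row index 2 to the Actor with column index 5 in a hypothetical binary Matrix object 'mynet':

> set(name = mynet, row = 2, col = 5, value = 1)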

bminit(network=[NAME], blockimage=[BI], searchtype=[exhaustive|localopt|ljubljana], method=[hamming|nordlund], *minclustersize=[INT], *nbrrestarts=[INT], *maxiterations=[INT], *maxtime=[INT], *nbrrandomstart=[INT], *doswitching=[yes|no(default)], *minnbrbetter=[INT])

Prepares/initializes a direct blockmodeling search (it then has to be started with the subsequent bmstart() command). The 'network' and 'blockimage' arguments provide the names of the Matrix and BlockImage objects that are to be used. The 'searchtype' can be either 'exhaustive', 'localopt' or 'ljubljana', for doing an exhaustive search testing all partitions, or using one of the two search algorithms to search for an optimal partition. The 'localopt' search is breadth-first, whereas 'ljubljana' is an adjusted depth-first search. The 'method' argument can either be 'hamming' (for standard binary blockmodeling) or 'nordlund' (for the direct weighted-correlation approach of Nordlund 2020).

There are several optional arguments here. By default, the minimum cluster size is 1, but this can be increased with the 'minclustersize' argument. For local optimization searches, 'nbrrestarts' sets the number of restarts per blockimage (default is 50), 'maxiterations' is the number of steps that can be taken per run (default 100), and 'nbrrandomstart' is the number of random partitions that should be tested before picking the one to start each run with (default is 50).

The local optimization algorithm has two actions: moving an individual actor from one position to another, and switching two actors in different positions with each other. In most cases, the moving operation should by itself be able to find the optimal partition, but sometimes the 'path' to the optimal solution is only found if two actors in different clusters are swapped with each other. By default, the local optimization algorithm only moves individual actors between clusters as it searches for an optimal partition, but it is also possible to activate the switching operation by setting the 'doswitching' argument to 'yes'.

The 'ljubljana' algorithm has an option 'minnbrbetter', by default set to 5: when searching neighboring partitions, the algorithm will find at least 5 incrementally better neighboring partitions before moving to the best of these and continuing the search from there. This number can be increased.

The argument 'maxtime' is used to set the maximum amount of time (in milliseconds) that the search should continue. If not specified, there is no maximum time and the search will continue until finished (which could be a very very long time). For instance, by setting this to 60000, the search algorithm will halt if it is still searching after one minute, providing details on how far it progressed.
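Putting this together, a local-optimization search on a hypothetical Matrix 'mynet' with a hypothetical BlockImage 'mybi', capped at one minute, could be initialized like this:

> bminit(network = mynet, blockimage = mybi, searchtype = localopt, method = nordlund, maxtime = 60000)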


bmstart(*outname=[OUTNAME])

If a direct blockmodeling search has been successfully initialized (using bminit(); see above), this command is used to start the search algorithm. It will continue until finished, or until the timeout has been reached (if that option was set in the preceding bminit() call). Once finished, one or more BlockModel objects will be created and stored in memory. The BlockModel objects are automatically named by default, but a name can also be specified with the optional 'outname' argument.

bmtest(network=[NAME], blockimage=[BI], partition=[PART], method=[nordlund|hamming])

This is used to test a hypothetical blockmodel, i.e. where the Matrix object [NAME], the blockimage [BI], the partition [PART] and the 'method' (either 'nordlund' or 'hamming') are all provided. Note: this function returns a BlockModel object. If used with an assigner, the BlockModel object is stored in Socnet; otherwise, the details of the BlockModel object will just be displayed on the terminal.

For the 'hamming' method, the provided blockimage can be multiblocked. For 'nordlund', the provided blockimage must be singleblocked.
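For example, testing a hypothetical blockmodel given by an existing Matrix 'mynet', BlockImage 'mybi', and Partition 'mypart' (all hypothetical object names):

> bmtest(network = mynet, blockimage = mybi, partition = mypart, method = hamming)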

bmview(blockmodel=[NAME], *ideal=[yes|no(default)])

View the content of the specified Blockmodel object. To also view the ideal matrix, use the optional 'ideal=y' argument. Note that this only views what the Blockmodel object contains, i.e. its sorted Matrix, a Partition and a Blockimage object, but it doesn't make these internal objects available in memory. Use the bmextract() function for that.

bmextract(blockmodel=[NAME], type=[matrix|blockimage|partition], *outname=[OUTNAME])

A Blockmodel object can be seen as a container for four internal objects relating to a particular Blockmodel solution: a Matrix representing the sorted blockmodel, a second Matrix representing the ideal binary BlockModel, a Partition representing how the Actors are separated into the various positions, and a Blockimage object, representing the ideal blockimage that the solution found. This function is used to extract these results as new individual objects.

The argument 'type' is used to specify which of these to extract. The value 'matrix' returns a sorted version of the Matrix object that was used in the blockmodel analysis. This Matrix object has a new, solution-specific Actorset that is based on the Actorset of the original Matrix object that was analyzed, but sorted according to the found solution and with a prefix added to each actor label identifying the position that the actor belongs to. Choosing this type will also produce and return a Matrix representing the ideal binary blockmodel for this solution. The ideal matrix is useful for inspecting blocks such as p-cores, density blocks, and various regular blocks.

The value 'blockimage' returns the BlockImage object for the particular solution. If the BlockImage object provided in bminit() was multiblocked, the extracted BlockImage will be a new, single-blocked BlockImage that contains the specific blocks that were found to be ideal. If the provided BlockImage was singleblocked, e.g. as for the core-periphery function coreperi(), the returned BlockImage will be the same as the one provided.

If the value 'partition' is provided, this will extract the specific partition of the original Actorset, i.e. the Actorset used by the original Matrix object provided to the bminit() function. The Actorset connected to this Partition is thus not the same as the Actorset connected to the 'matrix' that can be extracted; the latter is a sorted, prefixed version of the former.

The extracted objects will be automatically named, but it is also possible to specify the name of the extracted object, using the optional 'outname' argument.
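For instance, to extract the Partition from a BlockModel object named 'bm0' (a hypothetical object name) and store it as 'mypart':

> bmextract(blockmodel = bm0, type = partition, outname = mypart)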


This function returns the internal log of Socnet's blockmodeling functionality. This is mainly used for bug checking.

coreperi(network=[NAME], searchtype=[exhaustive|localopt|ljubljana], *powerrelational=[dep|dom|depdom], *core=[COREBLOCK], *intercat=[INTERCATBLOCK], *ptoc=[PTOCBLOCK], *ctop=[CTOPBLOCK], *minclustersize=[INT], *nbrrestarts=[INT], *maxiterations=[INT], *doswitching=[yes|no(default)], *minnbrbetter=[INT], *maxtime=[INT], *nbrrandomstart=[INT])

Initializes and executes a correlation-based core-periphery search, where a temporary internal core-periphery BlockImage is created according to the specifications. The 'network' argument specifies the name of the Matrix object, and 'searchtype' can be either 'exhaustive', 'localopt' or 'ljubljana'.

Default settings correspond to the default Borgatti-Everett core-periphery approach, i.e. where inter-categorical blocks are excluded from the correlation. This corresponds to having so-called dnc (do-not-care) blocks in the two off-diagonal positions, with intra-core ties modeled as a com (complete) block and intra-peripheral ties modeled as a nul (null/empty) block.

To set the two inter-categorical (off-diagonal) positions to something else, use the 'intercat' argument (for setting both inter-categorical blocks), or specify these blocks separately with 'ctop' (core-to-periphery ties) and 'ptoc' (periphery-to-core ties). Although the values for these arguments can be the name of any of the ideal block types available in Socnet, typical inter-categorical ideal blocks here are denuci(d) (for the Ucinet-style exact-'density' block), den(d) (for the exact-density block), and denmin(d) (for the minimum-density block), where d is the specific density that should be matched on (a number between 0 and 1, such as 0.25).

It is also possible to set the core-periphery model to the power-relational type proposed in Nordlund (2018), by using the 'powerrelational' argument. Setting this to 'dep' will prepare inter-categorical blocks looking for peripheral dependency, 'dom' will look for core dominance, and 'depdom' will look for both peripheral dependency and core dominance. If this argument is present, it will overrule any other arguments relating to inter-categorical ties.

The intra-core block is by default a complete block, but this can be set with the 'core' argument. Apart from the default com ideal block type, another possibility is the pco(p) ideal block, i.e. a proportional version of the k-core block type, where p is the proportion of intra-core ties that each actor in the core must have with the others in the core (with the number of ties rounded upwards).

Additional arguments for the 'coreperi' function are the same as for the bminit() function: consult the reference for that.
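For example, a core-periphery search on a hypothetical Matrix 'mynet', with both inter-categorical blocks set to minimum-density blocks with density 0.2:

> coreperi(network = mynet, searchtype = localopt, intercat = denmin(0.2))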

dichotomize(name=[NAME], threshold=[FLOAT], condition=[ge|gt|le|lt|eq|ne], *truevalue=[FLOAT|keep], *falsevalue=[FLOAT|keep])

Given an existing Matrix, Table or Vector object NAME, this function returns a dichotomized version of this object. Each value in the provided data object is checked against a 'condition' and a 'threshold' value. Conditions can be either of 'ge' (>=, i.e. greater than or equal to), 'gt' (>, i.e. greater than), 'le' (<=, i.e. less than or equal to), 'lt' (<, i.e. less than), 'eq' (i.e. equal to), or 'ne' (i.e. not equal to). Each value is thus checked to see whether it fulfils the condition with respect to the provided 'threshold' value.

If the condition is true, the corresponding value in the new data object will be 1. If the condition is false, the corresponding value in the new data object will be 0. The resulting values can however be changed with the optional 'truevalue' and 'falsevalue' arguments. These can be set to constant values, but it is also possible to state 'keep' here, meaning that the original value should be kept. For instance, to set all values below 10 to zero, but to keep the original value for those that are equal to or greater than 10, use the following arguments (the falsevalue is already zero by default): condition=ge, threshold=10, truevalue=keep
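That example could be run on a hypothetical valued Matrix 'trade' like this:

> dichotomize(name = trade, threshold = 10, condition = ge, truevalue = keep)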

The function returns a new data object: if used with an assigner, a new object with this name will be stored in Socnet. Otherwise, the dichotomized object will just be displayed on the console.

symmetrize(name=[NAME], method=[min|max|minnonzero|average|sum|difference|ut|lt])

Symmetrizes a Matrix object using the specified 'method' to compare each possible tie with its reciprocal possible tie. Returns a new Matrix object: if used with an assigner, this new Matrix object is stored in Socnet; otherwise, the symmetrized version of the provided Matrix will just be displayed. The original Matrix object will not be modified.

Possible methods are: 'max' (the maximum of the two values), 'min' (the minimum of the two values), 'minnonzero' (the minimum of the two values while excluding zeros, but returning zero if both cells are zero), 'average' (the average of the two values, including zero values), 'sum' (the sum of the two values), 'difference' (the absolute difference between the two values, always positive), 'ut' (uses the value in the upper triangle, i.e. the top-right part of the matrix), 'lt' (uses the value in the lower triangle, i.e. the bottom-left part of the matrix).
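For example, symmetrizing a hypothetical Matrix 'mynet' by taking the maximum of each pair of reciprocal values:

> symmetrize(name = mynet, method = max)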

rescale(name=[NAME], *min=[FLOAT], *max=[FLOAT], *incldiag=[yes|no(default)])

Rescales all tie values in a Matrix object, taking the current minimum and maximum value in the Matrix and doing a linear rescaling of these values. Returns a new Matrix object: if used with an assigner, this new Matrix object is stored in Socnet; otherwise, the rescaled version of the provided Matrix will just be displayed. The original Matrix will not be modified.

By default, the rescaled Matrix will be in the range 0 to 1, but this can be set with the 'min' and 'max' arguments. If the 'min' value is not smaller than the 'max' value, an error will be given. The optional 'incldiag' argument specifies whether diagonal values in the matrix should be included when determining the transformation and whether they should also be rescaled. By default, they are excluded.
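For instance, to rescale the tie values of a hypothetical Matrix 'trade' to the range 0 to 100, leaving the diagonal untouched:

> rescale(name = trade, min = 0, max = 100)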


citeinfo()

This function provides journal references (DOIs) to where the individual blockmodeling approaches are presented. When using Socnet, I would be happy if you could refer to the software client itself, but also remember to cite the particular sources where the respective methods and approaches were introduced.

[To top]

Ideal block types

[To top]

Socnet allows for all the common ideal blocks that are used in blockmodeling, covering both structural and regular equivalence blockmodeling, along with ideal blocks from generalized blockmodeling. It also covers more specialized ideal block types, specifically those pertaining to various core-periphery models.

The various ideal blocks in Socnet are used to populate BlockImage objects that are created using the blockimage() function. They are also used to specify non-default core-periphery properties in the coreperi() function. When specifying an ideal block for these functions, use the abbreviated names found in the Usage column in the table below.

Most blocks are implemented both for conventional Hamming-penalty-based blockmodeling and for weighted-correlation-based blockmodeling. The Ucinet-style density blocks are however only implemented for correlation-based goodness-of-fit functions. More ideal blocks will be implemented in future versions of Socnet, such as a Hamming-based version of the p-core, dual versions of the k-core and k-plex ideal blocks, and the various ideal blocks from the valued blockmodeling framework of Žiberna (2007).

Some ideal block types have parameters that have to be specified when used. To specify a block where the minimum density should be 0.2, type denmin(0.2). This is specified in the ideal block table below.

Usage	Name	Ideal block pattern	Parameter	Compatible methods
dnc	Do-not-care	Anything goes!		hamming, nordlund
nul	Null (0-block)	Empty of ties		hamming, nordlund
com	Complete (1-block)	Filled with ties		hamming, nordlund
reg	Regular	At least one tie in each row and column		hamming, nordlund
rre	Row-regular	At least one tie in each row		hamming, nordlund
cre	Column-regular	At least one tie in each column		hamming, nordlund
rfn	Row-functional	Exactly one tie in each row		hamming, nordlund
cfn	Column-functional	Exactly one tie in each column		hamming, nordlund
denuci(d)	Ucinet-style exact density	Filled with ties, all with value d	d: 'density' (ideal tie value)	nordlund
den(d)	Exact density	Block density close to d	d: ideal density	hamming, nordlund
denmin(d)	Minimum density	Block density at least d	d: minimum density	hamming, nordlund
pco(p)	p-core	Proportional version of k-core	p: proportion	nordlund
pcdd	Dependency/Dominance (PC)	Specialized periphery-to-core block*		nordlund
cpdd	Dependency/Dominance (CP)	Specialized core-to-periphery block*		nordlund

*: The two specialized blocks for peripheral dependency and core dominance are automatically created by the coreperi() function when using the 'powerrelational=depdom' option. It is recommended to only use them in such situations.

[To top]

File formats and data objects

Socnet is prepared to work with several types of data objects, but for the current version, the relevant types are Matrix, BlockImage, Partition, and BlockModel objects. Once created in Socnet, these objects can be saved as individual text files using the save() function, each type saved in a simple-to-read text file. Having saved these files, they can subsequently be loaded back into Socnet.

To get your own networks into Socnet, you need to know what these text files look like, so that you can prepare your data accordingly.

Matrix objects

The Matrix object corresponds to a network, which can be either binary or valued and is always treated as directional. The file format of Matrix objects consists of a square tab-separated text table, where both the first row and the first column contain the actor labels. Row actors represent the sources of ties, and column actors represent their destinations. By default, the tab character (\t) is used to separate cells, but this can be adjusted when loading. However, the comma (,) cannot be used to separate values.

This is the file format of the befig1.txt example network:

	1	2	3	4	5	6	7	8	9	10
1	0	1	1	1	1	0	0	0	0	0
2	1	0	1	1	0	1	1	1	0	0
3	1	1	0	1	0	0	0	1	1	0
4	1	1	1	0	1	0	0	0	0	1
5	1	0	0	1	0	0	0	0	0	0
6	0	1	0	0	0	0	0	0	0	0
7	0	1	0	0	0	0	0	0	0	0
8	0	1	1	0	0	0	0	0	0	0
9	0	0	1	0	0	0	0	0	0	0
10	0	0	0	1	0	0	0	0	0	0

BlockImage objects

The BlockImage object represents the broad patterns of a blockmodel, and can either be a potential blockimage that is to be examined, or a structure found by a search. BlockImages can be either singleblocked or multiblocked. When loading a blockimage text file, Socnet identifies the size and content of the blocks. Blockimages that are to be used in a search are typically created in Socnet using the blockimage() function, but they are also extracted from the results of a search.

Below are the files for two separate blockimages: the resulting blockimage from searching the Hlebec notesharing data for a 3-positional structure, and a hypothetical 4-positional custom structure.

Example 1:

	P0	P1	P2
P0	nul	reg	nul
P1	nul	reg	nul
P2	nul	reg	reg

Example 2:

	P0	P1	P2
P0	denmin(0.5)	reg	nul
P1	nul	reg	reg;nul
P2	reg;nul	nul	reg;rre

Partition objects

The Partition object represents a partition of an Actorset, i.e. where the Actors are placed in a fixed set of numbered clusters. Partitions are the main findings of a blockmodel analysis, as they describe which Actors seem most equivalent to each other. Partitions can also be created using the partition() function, specifically to test a hypothetical blockmodel using bmtest().

Below is the file when saving the optimal partition from the default Borgatti-Everett core-periphery search of the Baker binary network:

actor	partindex
cw	0
cysr	0
jswe	0
ssr	0
scw	0
swra	0
sw	0
can	1
fr	1
cswj	1
amh	1
asw	1
bjsw	1
pw	1
ccq	1
jgsw	1
jsp	1
swg	1
swhc	1
ijsw	1

BlockModel objects

BlockModel objects in Socnet correspond to a particular blockmodeling solution. This means that each BlockModel object has multiple internal objects: a Matrix (representing the sorted version of the Matrix that was analyzed), a second Matrix (representing the ideal binary blockmodel), a BlockImage (representing the singleblocked blockimage that fits the solution), and a Partition (representing the particular partition of actors into the different positions of the BlockImage). To save these individual objects, they first have to be extracted using the bmextract() function.

[To top]