# Research Journal: Oct 9 - Oct 15

### That awkward feeling of seeing your old code and not recognizing it

# Unresolved Questions

What are a few critical situations that should be simulated as test cases?

How should the N-of-1 context fit into this new mixed-effect Bayesian Network?

How does Jari’s code work?

What basic functionality needs to be in mebnlearn before it can be used to simulate?

What metrics should be used in the simulations?

# October 9

I started to read Bae’s **Learning Bayesian Networks from Correlated Data** and Turkia’s **Mixed‑effect Bayesian network reveals personal effects of nutrition** because I need to understand this new model in order to implement it for my own paper.

The first major realization after reading the papers was how the random effects are incorporated into the model: every node in the graph gains new parents that represent the random effects. What’s not clear to me is how something like random slopes can be incorporated into the model, or whether that’s something I actually need to worry about.
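To check my own understanding, here's a minimal sketch of what "a random-effect parent" means for a single node, under my current reading: each node's conditional model is a regression on its parents plus a person-level effect. This is my toy illustration, not code from either paper; the variable names and the two-stage estimation (within-person centering, then mean residuals) are my own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: node y has one observed parent x, plus a random-intercept
# "parent" b[person] that shifts y for each individual.
n_people, n_obs = 5, 40
person = np.repeat(np.arange(n_people), n_obs)
b_true = rng.normal(0.0, 1.0, size=n_people)      # person-level random effects
x = rng.normal(size=n_people * n_obs)
beta_true = 2.0
y = beta_true * x + b_true[person] + rng.normal(0.0, 0.3, size=x.size)

# Estimate the shared (fixed) slope on person-centered data, which cancels
# the random intercepts out of the regression.
x_mean = np.array([x[person == p].mean() for p in range(n_people)])
y_mean = np.array([y[person == p].mean() for p in range(n_people)])
x_c = x - x_mean[person]
y_c = y - y_mean[person]
beta_hat = (x_c @ y_c) / (x_c @ x_c)

# Recover each person's random intercept as their mean residual.
resid = y - beta_hat * x
b_hat = np.array([resid[person == p].mean() for p in range(n_people)])
```

The fully Bayesian version would put priors on `beta` and on the distribution of `b` and sample them jointly (e.g. in Stan), but the structural picture is the same: the person-level effect is just another parent of the node.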

New questions that need to be resolved:

How do I learn the structure of the network?

How do I learn the parameters for each of the networks?

Sonia wants us to take a fully Bayesian approach, so I’m pretty sure the latter will be answered by some code I found on the internet that I’ll have to translate into my own. It feels like this will take much longer than I’d like, but I have to make sure to read the paper and get my hands dirty each day. That’s the only way to do it.

# October 10

I found Jari Turkia’s talk on his work. It doesn’t go into much detail about the code, but it does back up the idea that we just build the network one node at a time. I think looking at how regular Bayesian networks are made in Stan will be helpful for understanding his code.

I found two answers to my question from October 9, “How do I learn the structure of the network?” In their paper, Bae et al. use the K2 algorithm, a “forward search” algorithm developed by Cooper and Herskovits. Turkia goes with a different (fully Bayesian) approach and assumes a densely connected network, then uses a shrinkage prior to remove any connections with small strength. I want to use the former algorithm because I don’t want to assume the bipartite structure used by Turkia. I know that the bnlearn library uses a greedy hill-climbing algorithm, so that’s another avenue I might go down. There are many routes right now, so I think they need to be prioritized.
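To make the K2 idea concrete for myself, here's a minimal sketch, not Bae et al.'s actual implementation: K2 proper scores discrete data with a Bayesian Dirichlet metric, while this toy version assumes a known node ordering and swaps in a linear-Gaussian BIC score so it runs on continuous data. The greedy forward-search skeleton is the part that matches the algorithm.

```python
import numpy as np

def gauss_bic(child, parents, data):
    """BIC of a linear-Gaussian model child ~ parents (higher is better)."""
    n = data.shape[0]
    y = data[:, child]
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * X.shape[1] * np.log(n)

def k2(order, data, max_parents=2):
    """K2 forward search: for each node, greedily add the predecessor
    (in the fixed ordering) that most improves the score; stop when
    no candidate improves it."""
    parents = {v: [] for v in order}
    for i, v in enumerate(order):
        candidates = set(order[:i])
        best = gauss_bic(v, parents[v], data)
        while candidates and len(parents[v]) < max_parents:
            scores = {c: gauss_bic(v, parents[v] + [c], data) for c in candidates}
            c_best = max(scores, key=scores.get)
            if scores[c_best] <= best:
                break
            best = scores[c_best]
            parents[v].append(c_best)
            candidates.remove(c_best)
    return parents

# Toy chain x0 -> x1 -> x2: K2 should recover each node's true parent.
rng = np.random.default_rng(1)
n = 500
x0 = rng.normal(size=n)
x1 = 1.5 * x0 + 0.1 * rng.normal(size=n)
x2 = -1.0 * x1 + 0.1 * rng.normal(size=n)
parents = k2([0, 1, 2], np.column_stack([x0, x1, x2]))
```

The big caveat is the fixed ordering: K2 only searches among predecessors, which is exactly why hill climbing (as in bnlearn) is attractive when no natural ordering exists.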

1. Examine the bayesvl library to see if it might be a good out-of-the-box solution for creating the mixed-effect Bayesian network. This doesn’t solve the structure-learning problem, but it provides an easy way to convert a structure into Stan code.

2. Figure out the structure-learning problem yourself and try to implement it using a combination of Bae and Turkia’s work. Turkia’s code does the parameter-learning procedure, but it does not learn structure. Definitely the harder of the two options, but probably a better learning experience.

# October 11

I tried reading through a few papers to see if there was a particular Bayesian algorithm that I could refer to for structure learning. I did find one (Friedman 2003), but the notation and math are a little over my head. Not the most ideal sign, but it might be a hump I need to climb over. A slow climb, unfortunately.

Later on in the day, I discovered a new page on the bnlearn site that discusses the use of mixed-effects models. This is exactly what I wanted, but I should note to myself that the approach is probably frequentist, and not what Sonia will be hyped about. I’ll have a read through it tomorrow, but it’s a fortunate find on my part.

On a personal note, this part of research feels especially painful on the mental and emotional front. I am finding it hard to sit down and read through the papers in a meaningful way, especially when I encounter some mathematical notation. I know that I should approach math as another language, but it’s a reflex from first-year to feel like part of my brain has shut off. I really want to be comfortable with this, but I also don’t know where to start. But I guess the only place to start is somewhere, right?

# October 12

THE CODE ALSO HAD A PAPER ASSOCIATED WITH IT. You know for sure that that’s going to be the main focus of today. I feel good about this one (not just because the author is the same person who created the library I used for my paper). I feel so alive.

Papers from the Machine Learning journal are thankfully easy to read. I was able to get through the notation and simulations relatively easily, so I could move onto the task of reading and understanding the associated code. The paper and code solve the following questions:

~~How do I learn the structure of the network?~~

~~How do I learn the parameters for each of the networks?~~

And with this code, I won’t need to implement my own library, so I can strike these questions:

~~How does Jari’s code work? (for now)~~

~~What basic functionality needs to be in mebnlearn before it can be used to simulate?~~

The main questions now deal with figuring out the key simulations that I need to make for the presentation and for the paper.

What are a few critical situations that should be simulated as test cases?

How should the N-of-1 context fit into this new mixed-effect Bayesian Network?

# October 13

Now that I have code to generate the mixed-effect Bayesian networks, it’s time to think about the ways that we can show that they are useful models in an N-of-1 context. I’ll lay out some of my thinking here:

The point of an N-of-1 trial is to find the best treatment on an individual level. Each person is expected to react differently to a new treatment or intervention. The standard method for analyzing N-of-1 data is through mixed models or a GEE.

An extra wrinkle I need to consider is that my data come from a behavioral N-of-1 trial. Other variables of interest were also measured, and we might want to see whether they influence the outcome too. One way to demonstrate this with the mixed-effect BNs is to **show that the random effects for these non-treatment features are significantly different from 0 for some individuals, while they may be zero for others.** The same Bayesian network will be built for everyone, but these random effects will allow individual differences.
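A quick simulation of what that bolded claim would look like, under made-up names and numbers: a non-treatment feature (call it `sleep`) affects the outcome for half the individuals and not the others, and a per-person slope with a conservative interval should flag exactly those individuals. The Bayesian version would inspect posterior intervals of the random effects instead, but the logic is the same.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical behavioral feature: its effect is zero for people 0-2
# and clearly nonzero for people 3-5.
n_people, n_obs = 6, 60
slopes_true = np.array([0.0, 0.0, 0.0, 1.2, 1.2, 1.2])

flagged = []
for p in range(n_people):
    sleep = rng.normal(size=n_obs)
    outcome = slopes_true[p] * sleep + rng.normal(0.0, 0.5, size=n_obs)
    # Per-person least-squares slope and its standard error.
    b = (sleep @ outcome) / (sleep @ sleep)
    resid = outcome - b * sleep
    se = np.sqrt((resid @ resid / (n_obs - 1)) / (sleep @ sleep))
    # Conservative 3*se interval: does it exclude zero?
    flagged.append(bool(abs(b) > 3 * se))
```

If the fitted mixed-effect BN recovers this pattern on real data, that's the individual-differences story for the presentation.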

I think I have to lean into the predictive element of the models as well. Bayesian networks should be more beneficial when there is real interconnectedness between the nodes, while the mixed models will benefit more when there is only a linear dependency. We’ll do a train-test split of the data in these two different contexts and hopefully show that the mixed-effect BNs are better.
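A toy version of why interconnectedness should favor the BN, with invented structure and names: if the treatment affects the outcome only through a mediator node, a model that uses the mediator (as a BN factorization would) should beat a direct treatment-to-outcome regression on held-out data. This is just a sanity-check sketch for the simulation design, not the planned analysis itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

# Chain: treatment -> mediator -> outcome.
treat = rng.normal(size=n)
mediator = 1.0 * treat + rng.normal(0.0, 1.0, size=n)
outcome = 2.0 * mediator + rng.normal(0.0, 0.5, size=n)

train, test = slice(0, 300), slice(300, None)

def fit_predict(X_tr, y_tr, X_te):
    """Ordinary least squares with an intercept; predict on X_te."""
    X_tr = np.column_stack([np.ones(len(y_tr)), X_tr])
    X_te = np.column_stack([np.ones(X_te.shape[0]), X_te])
    return X_te @ np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]

# Direct model: outcome regressed on treatment alone.
pred_direct = fit_predict(treat[train, None], outcome[train], treat[test, None])
# BN-style model: use the mediator node sitting between treatment and outcome.
pred_bn = fit_predict(mediator[train, None], outcome[train], mediator[test, None])

mse_direct = np.mean((outcome[test] - pred_direct) ** 2)
mse_bn = np.mean((outcome[test] - pred_bn) ** 2)
```

Held-out MSE is the obvious first metric here; log predictive density would be the more Bayesian choice once the Stan models exist.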

I also spent some time refamiliarizing myself with the data today, since it’s been a while. I’ll need that familiarity when I run the analyses for the metrics above.

# October 14

My family’s hectic planning makes getting work done a monumental task. I’m okay with not doing as much this weekend since I was productive earlier in the week.

# October 15

Didn’t really get any work done since I was at a wedding.