-
Hi Dr. Frischkorn & Dr. Popov,

In my experimental paradigm, subjects did a flanker task during the retention period, so there are two trial types at the probe (reproduction) stage: trials where they made an error on the flanker task vs. trials where they responded correctly on the flanker task. The set size was fixed at 3. I am interested in whether/how action errors during the retention period affect the memory representation, and I wanted to test this with the three-parameter mixture model, expecting that action errors would increase P(non-target) compared to the correct condition.

However, the mixture3p function only asks for three main arguments: resp_error, nt_features, and set_size. Does this mean that I can't use mixture3p to compare correct vs. error trials and instead need to use the mixture2p model (i.e., just compare the two conditions without the non-target features)?

P.S. I also tried feeding in set_size as correctness (e.g., 1 = correct, 2 = error) but ended up getting an error.

Thank you.
Best,
-
Hi Yoojeong,

Any predictor of the model parameters (i.e., the mixture weights thetat and thetant and the precision kappa) is specified through the model formula (bmmformula), not through the arguments of the model function. So you can simply add your flanker-accuracy condition (correct vs. error) as a predictor of these parameters. This would specify that the mixture weights that capture the probability of target responses (thetat) and of non-target responses (thetant), as well as the precision (kappa), are estimated separately for correct and error trials.

The arguments passed to mixture3p (resp_error, nt_features, and set_size) only identify which variables in your data contain the response error, the non-target feature values, and the set size; they are not meant to code experimental conditions. Using the mixture2p model is therefore not necessary: you can compare correct vs. error trials with mixture3p through the formula.

In addition, I would recommend supplying the set_size variable as the actual set size used in your task (here, 3) rather than recoding it as correct vs. error.

I hope my response is helpful.
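To make this concrete, below is a minimal sketch of what the full specification could look like. The column names dev_rad, nt1_loc, nt2_loc, set_size, flkacc, and ID are placeholders for the variables in your own data (see ?mixture3p and ?bmmformula for the exact argument formats).

```r
library(bmm)

# Model: which data columns hold the response error, the non-target features,
# and the set size (the set size is always 3 here, so there are two non-targets)
flanker_model <- mixture3p(
  resp_error  = "dev_rad",
  nt_features = c("nt1_loc", "nt2_loc"),
  set_size    = "set_size"
)

# Formula: estimate the mixture weights and precision separately for
# correct vs. error flanker trials (flkacc), with by-subject random effects
flanker_formula <- bmmformula(
  thetat  ~ 0 + flkacc + (0 + flkacc || ID),
  thetant ~ 0 + flkacc + (0 + flkacc || ID),
  kappa   ~ 0 + flkacc + (0 + flkacc || ID)
)

flanker_fit <- bmm(
  model   = flanker_model,
  data    = flanker_data,   # your trial-level data frame
  formula = flanker_formula
)
```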
-
Hi Dr. Frischkorn,

I specified the formula as follows:

```r
flanker_formula <- bmmformula(
  thetat  ~ 0 + flkacc + (0 + flkacc || ID),
  thetant ~ 0 + flkacc + (0 + flkacc || ID),
  kappa   ~ 0 + flkacc + (0 + flkacc || ID)
)
```

and the results seem to be what I expected. Following errors, the precision (kappa) in the flanker_error condition is lower than in the flanker_correct condition:

```
                     Estimate Est.Error     Q2.5    Q97.5
kappa_flkaccflkCorr  7.870516  1.074922 6.802424 9.055717
kappa_flkaccflkErr   6.715926  1.107846 5.460089 8.191594
```

```
> pmem
                      Estimate Est.Error      Q2.5     Q97.5
thetat_flkaccflkCorr 0.6907493 0.3250798 0.6370255 0.7283446
thetat_flkaccflkErr  0.5847848 0.3173381 0.5525286 0.6047095
```

It seems that p(mem) decreases and p(guess) increases following errors, and p(target) and p(non-target) are roughly what I expected. I still have to run the statistical inference (one possible approach is sketched below), but it looks promising.
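For example, assuming the fitted object is called flanker_fit and the coefficient names match the summary output above, bmm fits are brmsfit objects under the hood, so brms::hypothesis() should work for a directional comparison on the model's native parameter scale:

```r
# Sketch: directional comparisons of correct vs. error coefficients,
# assuming the fit object is `flanker_fit` and coefficient names as shown above
library(brms)

hypothesis(flanker_fit, "thetat_flkaccflkCorr > thetat_flkaccflkErr")  # higher target weight after correct trials?
hypothesis(flanker_fit, "kappa_flkaccflkCorr > kappa_flkaccflkErr")    # higher precision after correct trials?
```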
My follow-up question is: do you have any idea how to set up the number of chains for this type of mixture model? I have heard that some people set the number of chains to 3 × the number of parameters to be estimated. My current setup is:

```r
# Set up parallel sampling of MCMC chains
options(mc.cores = parallel::detectCores())

nChains <- 6 # number of chains

if (nChains > parallel::detectCores()) {
  nChains <- parallel::detectCores()
}
```

Thanks much!
Best,
-
The number of chains is not related to the number of parameters. Usually the default of 4 is a good choice. In Bayesian MCMC models we run multiple chains so that we can be sure that the parameter estimates have converged from different starting values.

The cores argument specifies how many chains you want to run in parallel on your machine. Specifying `options(mc.cores = parallel::detectCores())` tells brms to run as many chains in parallel as you have CPU cores, but no more than the number of chains you specify. So if you call the model like this:

```r
options(mc.cores = parallel::detectCores())

flanker_fit <- bmm(
  model = flanker_model,
  data = flanker_data,
  formula = flanker_formula
)
```

it will run 4 chains (the default), and if your machine has 4 or more cores, it will run all chains in parallel. You can also control this directly by specifying the chains and cores arguments to bmm:

```r
flanker_fit <- bmm(
  model = flanker_model,
  data = flanker_data,
  formula = flanker_formula,
  chains = 6,
  cores = 3
)
```

The above code, for example, will run 6 chains in total with 3 chains running in parallel: 3 chains will run first, and when they finish, the other 3 will run. Usually the default of 4 chains with cores = 4, if your machine has enough cores, is a good approach.

Good to see that your model is running. Be sure to investigate whether the model has converged: all parameter estimates should have an Rhat value below 1.01, and you should not get warnings about divergent transitions.
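For reference, here is a minimal sketch of how those convergence checks could look in code, assuming the fitted object is called flanker_fit; since bmm fits are built on brms, rhat() and nuts_params() should be available for them.

```r
# Minimal convergence checks (sketch); assumes the fit object is `flanker_fit`
library(brms)

summary(flanker_fit)                     # Rhat column should be < 1.01 for every parameter
max(rhat(flanker_fit), na.rm = TRUE)     # quick look at the worst Rhat in the model

# Count divergent transitions (should be 0)
np <- nuts_params(flanker_fit)
sum(subset(np, Parameter == "divergent__")$Value)
```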
-
btw, to make your code easier to read, it helps to wrap it in ``` marks like this:
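````
```
your code here
```
````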
which displays it in a nice separate box, like you see in the comments above (I edited your comment to add these).