Hello,
I'm trying to do some modelling using 6 templates with very low sequence identity (<30%), so I want to generate as many models as possible. I was thinking 50 in the first instance (unless someone recommends a lot more). In the interest of time I have decided to send the job to a cluster (parallel machines) I have access to, but before doing so I wanted to clarify some things.
Am I correct in assuming I need to create 50 different script files, each with a different STARTING_MODEL and ENDING_MODEL, and set a different random number seed in each script?
i.e. script 1: STARTING_MODEL = 1, ENDING_MODEL = 1
script 2: STARTING_MODEL = 2, ENDING_MODEL = 2 ...and so on
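To be concrete, rather than editing 50 files by hand I was thinking of generating them with a small helper like the one below. The MODELLER script body is my adaptation of the basic automodel example from the manual; 'alignment.ali', 'target', and the template codes are just placeholders for my actual files, and the seed scheme is my own guess (I believe MODELLER seeds have to be negative).

```python
# generate_jobs.py -- write one MODELLER script per model.
# File names and the seed scheme are hypothetical; adjust to taste.
TEMPLATE = """from modeller import *
from modeller.automodel import *

env = environ(rand_seed={seed})          # a different (negative) seed per job
a = automodel(env, alnfile='alignment.ali',
              knowns=('tmpl1', 'tmpl2'),  # my 6 template codes would go here
              sequence='target')
a.starting_model = {n}
a.ending_model = {n}
a.make()
"""

N_MODELS = 50

for n in range(1, N_MODELS + 1):
    seed = -1000 - n                     # e.g. job 1 gets -1001, job 2 gets -1002, ...
    with open('model_%02d.py' % n, 'w') as f:
        f.write(TEMPLATE.format(seed=seed, n=n))
```

Running this once would leave me with model_01.py ... model_50.py, each building exactly one model with its own seed, ready to submit as separate cluster jobs.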
Also, if I do this, will all the models be comparable with one another? I thought you could only compare models from the same run. Could someone briefly explain why this is or isn't the case?
And one last question. Should I be worried about log files overwriting each other, since I think they will all go to the same directory on the cluster? Is there maybe some line I can add to each script to direct the output into its own folder? (As you can probably tell from that question, my background is not in computing!)
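For that last point, one thing I was considering is putting a few lines like these at the top of each script, so every job changes into its own folder before doing any work (the run_NN naming scheme is just my guess; n would differ per script):

```python
import os

n = 7                                # this job's model number (differs per script)
workdir = 'run_%02d' % n             # hypothetical per-job directory name
os.makedirs(workdir, exist_ok=True)  # create the folder if it doesn't exist yet
os.chdir(workdir)                    # output files written from here land in run_07/
```

Would something like that be enough to keep each job's output separate, or is there a more standard way of doing this?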
Thanks in advance. I appreciate any help anyone can give.
-Zoe