Hi,
I have set up a Brownian dynamics optimizer as follows:
    m = IMP.kernel.Model()

    # Brownian Dynamics
    bd = IMP.atom.BrownianDynamics(m)
    bd.set_maximum_time_step(1000)

    for i in range(1, nrounds):
        e = bd.optimize(10)
Is it possible in IMP to parallelize the code to make it faster? I have seen that there is the possibility of using OpenMP, but I am not really sure how to use it.
Thanks, Davide
On 07/16/2014 02:11 AM, Davide Baù wrote:
> I have set up a Brownian dynamics optimizer as follows:
> [...]
> Is it possible in IMP to parallelize the code to make it faster?
If you really require the output of one optimization to be the input of the next, then no, short of parallelizing the restraint evaluation itself. Currently the only way to do that is via OpenMP.
If you can instead run nrounds optimizations starting from different (e.g. random) starting conditions, parallelization is trivial. You can use IMP.parallel or simply run multiple copies of your script, e.g. with different random seeds.
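A minimal sketch of that second option, using plain Python multiprocessing rather than IMP.parallel; build_system() is a hypothetical placeholder for whatever code already constructs your Model, particles and restraints, and the exact location of the random-number-generator seed call is an assumption that may differ between IMP versions:

    import multiprocessing
    import IMP.base
    import IMP.atom

    NROUNDS = 100   # number of 10-step optimization rounds per run

    def run_one(seed):
        # Give each independent copy its own random seed so the runs
        # explore different trajectories (seed call is an assumption;
        # it moved out of IMP.base in later IMP releases).
        IMP.base.random_number_generator.seed(seed)
        m, root = build_system()   # hypothetical: your existing setup code
        bd = IMP.atom.BrownianDynamics(m)
        bd.set_maximum_time_step(1000)
        e = None
        for i in range(NROUNDS):
            e = bd.optimize(10)
            # write out intermediate coordinates/scores here if needed
        return seed, e

    if __name__ == "__main__":
        pool = multiprocessing.Pool(processes=4)
        for seed, score in pool.map(run_one, range(4)):
            print("run %d finished with score %g" % (seed, score))

Each worker builds its own Model, so nothing needs to be shared between processes; the per-run results can be compared or clustered afterwards.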
Ben
So in principle this support can be added with the IMP OpenMP pragma macros, though since most of the computation time typically goes into the scoring function, it is probably wiser to parallelize that part of the code. I know Daniel did some work in that direction in the past, but I'm not sure where it's gotten; see also https://integrativemodeling.org/nightly/doc/html/base_2thread__macros_8h.htm...
It should in theory be pretty simple to add OpenMP statements at some key computations, based on profiling.
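As a rough, non-authoritative way to check that premise before touching the C++, one could time the scoring function alone against full BD steps from Python; my_restraints and bd below are assumed to come from your existing script:

    import time
    import IMP.core

    # Score (with derivatives, as BD needs them) the same way the optimizer does.
    sf = IMP.core.RestraintsScoringFunction(my_restraints)
    bd.set_scoring_function(sf)

    t0 = time.time()
    for _ in range(100):
        sf.evaluate(True)
    per_eval = (time.time() - t0) / 100

    t0 = time.time()
    bd.optimize(100)
    per_step = (time.time() - t0) / 100

    print("scoring: %.4g s/call, full BD step: %.4g s" % (per_eval, per_step))

If the first number dominates the second, OpenMP effort is indeed best spent inside restraint evaluation rather than in the BD integrator itself.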
On Wed, Jul 16, 2014 at 12:28 PM, Ben Webb ben@salilab.org wrote:
> If you really require the output of one optimization to be the input of
> the next, then no, short of parallelizing the restraint evaluation itself.
> Currently the only way to do that is via OpenMP.
> [...]
At one point at least there was OpenMP support within the Brownian dynamics code itself (for the particle perturbations) as well as in restraint evaluation.
On Wed, Jul 16, 2014 at 12:33 PM, Barak Raveh barak.raveh@gmail.com wrote:
> I know Daniel did some work in that direction in the past, but I'm not
> sure where it's gotten.
> [...]
I think using OpenMP pragmas, or even the IMP_OMP_PRAGMA macros, in your code and calling cmake with -DCMAKE_CXX_FLAGS="-fopenmp" should do the trick.
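Once the build has OpenMP enabled, the number of threads used for evaluation can, I believe, be controlled from Python; the function names below are an assumption based on IMP's base/threads.h and may live directly in the IMP module in other versions:

    import IMP.base

    # Assumed API from IMP base (threads.h); adjust for your IMP version.
    print("threads before: %d" % IMP.base.get_number_of_threads())
    IMP.base.set_number_of_threads(4)   # use 4 OpenMP threads for evaluation
    print("threads after: %d" % IMP.base.get_number_of_threads())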
Yannick
On 17/07/14 05:18, Daniel Russel wrote:
> At one point at least there was OpenMP support within the Brownian dynamics
> code itself (for the particle perturbations) as well as in restraint evaluation.
> [...]
participants (5)
- Barak Raveh
- Ben Webb
- Daniel Russel
- Davide Baù
- Yannick Spill