Re: [IMP-dev] [IMP-build] Build system failure, 08/15/2012
Guys, let's try to get one day of TOTALLY clean tests :) Can the people in charge of *cgal, isd and multifit (Fast build only)* give a quick look to see if they can quickly solve the test issues? (The atom case is known, and if the other tests are solved, we'll handle that one too.) If there's a complicated issue behind these failures, we can temporarily disable those tests until the relevant person gets to work on it. B
On Wed, Aug 15, 2012 at 7:01 AM, Notification of IMP build failures <imp-build@salilab.org> wrote:
> Build system failed on 08/15/2012.
> Failed components: IMP.
> Please see http://salilab.org/imp/nightly/tests.html for full details.
>
> IMP module failure summary (BUILD = failed to build;
> TEST = failed tests; DISAB = disabled due to wrong configuration;
> skip = not built on this platform; only modules that failed on
> at least one architecture are shown)
>
>           Lin32 Lin64 Mac   Mac64 Win32 Fast  Cov
> cgal      -     TEST  skip  TEST  skip  -     TEST
> atom      TEST  TEST  -     TEST  TEST  TEST  -
> multifit  -     -     -     -     -     TEST  -
> isd       TEST  TEST  TEST  TEST  TEST  TEST  TEST
On 8/15/12 11:02 AM, Barak Raveh wrote:
> Guys, let's try to get one day of TOTALLY clean tests :) Can the people in
> charge of *cgal, isd and multifit (Fast build only)* give a quick look to
> see if they can quickly solve the test issues? (The atom case is known,
> and if the other tests are solved, we'll handle that one too.) If there's
> a complicated issue behind these failures, we can temporarily disable
> those tests until the relevant person gets to work on it.
That's exactly what I will do, now that I have the test infrastructure running properly again. The MultiFit failures are not simple ones, but I will fix them. The ISD tests have been broken since day one, but hopefully Yannick will get around to fixing them...
BTW, never disable tests by removing them or commenting them out. There is an expected failure decorator you can use for such tests (or you can skip them if they only fail some of the time). Such tests get logged so we know we still have to fix them in the future.
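For the skip case, a rough self-contained sketch is below. It uses the skip decorators that stock unittest gained in Python 2.7; IMP's own test module may expose equivalents under different names, so the imports, class name and test names here are only illustrative.

import sys
import unittest

class FlakyTests(unittest.TestCase):
    @unittest.skip("known flaky; tracked separately")
    def test_sometimes_fails(self):
        # Never executed; reported as skipped, so it stays visible in the log.
        self.assertEqual(1 + 1, 3)

    @unittest.skipIf(sys.platform == 'win32', "not supported on Windows")
    def test_unix_only(self):
        # Skipped on Windows only; runs everywhere else.
        self.assertTrue(True)

if __name__ == '__main__':
    unittest.main()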
Ben
Ben, we also need this decorator for atom (it is something with BD which is probably nothing) - can you explain how to use it (or just point me to some example code)? Cheers, Barak
On 08/15/2012 11:26 AM, Barak Raveh wrote:
> Ben, we also need this decorator for atom (it is something with BD which is
> probably nothing) - can you explain how to use it (or just point me to some
> example code)?
Check out the unittest docs at python.org. IIRC, it looks something like
def my_test(self):
    ...
my_test = expectedFailure(my_test)
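In a complete test file, that rebinding style would look roughly like the sketch below. It assumes the decorator can be imported from unittest as in Python 2.7 (IMP's test module may provide its own equivalent); the class and test names are just placeholders. The rebinding uses no "@" syntax, so it also works on Pythons that predate 2.4 decorators, given a suitable expectedFailure helper.

import unittest
from unittest import expectedFailure

class AtomBDTests(unittest.TestCase):
    def test_known_broken(self):
        # Deliberately failing assertion standing in for the real test body.
        self.assertEqual(1 + 1, 3)
    # Plain function call instead of decorator syntax.
    test_known_broken = expectedFailure(test_known_broken)

if __name__ == '__main__':
    unittest.main()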
Ben
On 15/08/12 21:33, Ben Webb wrote:
> Check out the unittest docs at python.org. IIRC, it looks something like
>
> def my_test(self):
>     ...
> my_test = expectedFailure(my_test)

A more elegant solution is to use decorators (http://effbot.org/pyref/def.htm): just add @expectedFailure on the line before the test you want to mark:
@expectedFailure
def my_test(self):
    ...
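Put into a full test case, that might look like the following sketch (again assuming expectedFailure is importable from unittest as in Python 2.7, with placeholder names; the "@" form needs the Python 2.4+ decorator syntax discussed below):

import unittest
from unittest import expectedFailure

class AtomBDTests(unittest.TestCase):
    @expectedFailure
    def test_known_broken(self):
        # Still runs and is still reported, but a failure here shows up as
        # "expected" rather than as a new breakage; if it ever passes, it is
        # flagged as an unexpected success.
        self.assertEqual(1 + 1, 3)

if __name__ == '__main__':
    unittest.main()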
Y
On 8/16/12 2:47 AM, Yannick Spill wrote:
> A more elegant solution is to use decorators
That requires a newer version of Python though, so won't work.
Ben
It's possible since python 2.4!
On 8/16/12 7:11 AM, Yannick Spill wrote:
> It's possible since python 2.4!
Indeed, but our base is currently 2.3.
Ben
On Aug 16, 2012, at 7:38 AM, Ben Webb ben@salilab.org wrote:
> On 8/16/12 7:11 AM, Yannick Spill wrote:
>> It's possible since python 2.4!
>
> Indeed, but our base is currently 2.3.

What still uses python 2.3?
On 08/16/2012 08:32 AM, Daniel Russel wrote:
> What still uses python 2.3?
RHEL5 and OS X 10.4, which are both super old, of course. But there's no real pressing reason to aggressively drop Python 2.3, since unittest decorators (which, as this discussion should have demonstrated, are hardly used in IMP) are the only real syntax change.
Ben
On Wed, Aug 15, 2012 at 11:02 AM, Barak Raveh barak.raveh@gmail.com wrote:
> Guys, let's try to get one day of TOTALLY clean tests :) Can the people in
> charge of *cgal, isd and multifit (Fast build only)* give a quick look to
> see if they can quickly solve the test issues? (The atom case is known,
> and if the other tests are solved, we'll handle that one too.)
The cgal one is something that does not work with the version of cgal installed there. A test failure on that platform seems like the most correct situation.
> If there's a complicated issue behind these failures, we can temporarily
> disable those tests until the relevant person gets to work on it.

Why would that be an improvement (in the case where something in the module is broken, as opposed to the test being a bad test)? If something is broken, shouldn't we have a test failing to tell us that?
That said, I would really like a more fine-grained display (e.g. the number of failing tests). That way one could tell at a glance when new tests break.
The number of failed tests sounds like a great idea to me. Still, it is problematic that we get a "failure" notice every morning when all we have are known issues; it means that nobody cares what's in those tests anymore. Ben's decorator sounds like a great general solution (perhaps we can also mark failures in specific system builds, so we know if something shows up on another system). We can still keep known test failures in the report, in parentheses, but this way the system will explicitly be aware of new failures possibly caused by careless changes to code, as opposed to known failed tests explicitly marked by a programmer.
Barak
On Wed, Aug 15, 2012 at 11:43 AM, Barak Raveh barak.raveh@gmail.com wrote:
> The number of failed tests sounds like a great idea to me. Still, it is
> problematic that we get a "failure" notice every morning when all we have
> are known issues; it means that nobody cares what's in those tests anymore.
I would simply change the email title to "results" :-) Given that there are a bunch of separate modules in there maintained by different people, I don't think it makes sense to aim for a state where a single bit provides useful information about all of IMP.
Changing the name to "results" might not be a bad solution :) As long as we get an indication of completely new failures, that's good (I would actually put those in the subject line).
Yeah, being able to quickly determine how things compare with past days would be really nice.
I should add, until we move to git or something where people start using development branches more...
On 08/15/2012 11:28 AM, Daniel Russel wrote:
> That said, I would really like a more fine-grained display (e.g. the number
> of failing tests). That way one could tell at a glance when new tests break.
In fact, I added that to the build system yesterday. ;) I just need to add a UI - should be there soon.
Ben
Awesome. I should read the commit messages more carefully :-)
On 08/15/2012 11:28 AM, Daniel Russel wrote:
> The cgal one is something that does not work with the version of cgal
> installed there. A test failure on that platform seems like the most
> correct situation.
Out of curiosity, what is the origin of the problem? Some of the tests fail with CGAL 3.8, but others with 4.0. Is there a version which does work?
Ben
Actually, now that I look at it some more the current tests

On Aug 15, 2012, at 3:30 PM, Ben Webb ben@salilab.org wrote:
> Out of curiosity, what is the origin of the problem? Some of the tests
> fail with CGAL 3.8, but others with 4.0. Is there a version which does
> work?
Actually, it appears I wasn't monitoring things well enough. The bad cgal versions seem to have been dropped, and it is something with compiler arguments now. Hmmm, I fail the build results monitoring test :-) Will look into it.
participants (4):
- Barak Raveh
- Ben Webb
- Daniel Russel
- Yannick Spill