Number of failed tests sounds like a great idea to me. Still, it is problematic that we get a "failure" notice every morning when all we have are known issues; it means that nobody cares what's in those tests anymore. Ben's decorator sounds like a great general solution (perhaps we can also mark failures in specific system builds, so we know if something shows up on another system). We can still keep known test failures in the report in parentheses, but this way the system will explicitly flag new failures, possibly caused by careless changes to code, as opposed to known failures deliberately marked by a programmer.
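
For reference, a minimal sketch of the kind of decorator-based marking we're talking about, using Python's standard unittest.expectedFailure (Ben's actual decorator may look different, and the test bodies here are just placeholders):

import unittest

class ExampleTests(unittest.TestCase):
    # Placeholder tests standing in for real IMP module tests.

    @unittest.expectedFailure
    def test_known_issue(self):
        # Explicitly marked by a programmer: the runner reports this
        # as an "expected failure" instead of a plain failure, so the
        # nightly report stays clean while the bug is still tracked.
        self.assertEqual(1 + 1, 3)

    def test_still_guarded(self):
        # Unmarked tests still fail loudly, so careless changes to
        # code show up as new failures.
        self.assertEqual(1 + 1, 2)

if __name__ == '__main__':
    unittest.main()

As a bonus, if a marked test unexpectedly starts passing, unittest reports it as an "unexpected success", which is exactly the signal to remove the mark.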
On Wed, Aug 15, 2012 at 11:02 AM, Barak Raveh <barak.raveh@gmail.com> wrote:
> guys, let's try to get one day of TOTALLY clean tests :) can people in
> charge of cgal, isd and multifit (Fast build only) give a quick look to
> see if they can quickly solve the test issues (the atom case is known,
> and if other tests are solved, we'll handle that too).

The cgal one is something that does not work with the version of cgal installed there. A test failure on that platform seems like the most correct situation.
> If there's a complicated issue behind these failures, we can temporarily
> disable those tests until the relevant person gets to work on it.

Why would that be an improvement (in the case where something in the module is broken, as opposed to the test being a bad test)? If something is broken, shouldn't we have a failing test to tell us so?
That said, I would really like a more fine-grained display (e.g. number of failing tests). That way one could tell at a glance when new tests break.
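
A sketch of how the report could produce that count, assuming the tests run under plain unittest (the 'test' discovery directory is a placeholder):

import unittest

# Placeholder: discover tests under a 'test' directory.
suite = unittest.defaultTestLoader.discover('test')
result = unittest.TestResult()
suite.run(result)

# failures/errors are genuinely new breakage worth a red flag;
# expectedFailures are the known, programmer-marked ones.
print("new failures:", len(result.failures) + len(result.errors))
print("known failures:", len(result.expectedFailures))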