Feature Requests

  1. Add ExclusivelyUses support to the NCrunch configuration settings as an alternative to using the attribute

    If you could control the ExclusivelyUses values for a test, project, or assembly from the NCrunch configuration settings as well as through the attribute, projects and solutions would not need a reference to NCrunch.Framework.dll. For teams where only some developers use NCrunch, the settings would then stay with that user's configuration rather than the shared codebase.
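
    For reference, the existing attribute-based approach (the one that requires a reference to NCrunch.Framework.dll) looks roughly like the sketch below; the "Database" resource name and the NUnit test around it are illustrative assumptions, not taken from the original request:

        using NCrunch.Framework;
        using NUnit.Framework;

        public class DatabaseTests
        {
            // ExclusivelyUses tells NCrunch not to run this test concurrently
            // with any other test that declares the same resource name.
            [Test, ExclusivelyUses("Database")]
            public void MigratesSchema()
            {
                // ... test body ...
            }
        }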

    3 votes  ·  1 comment
  2. Show code coverage metric in the status bar next to "N"

    It would save a few clicks if the code coverage figure were displayed in the status bar next to the green "N" (or instead of it).
    It would also be a constant reminder of the current code coverage before committing.

    12 votes  ·  1 comment
  3. xUnit tests search: take into account attributes derived from [Fact]

    xUnit allows you to extend its functionality by deriving from the [Fact] attribute; for example, this approach can be used to generate test case parameters. It even has an out-of-the-box [Theory] attribute that is derived from [Fact] (see https://github.com/paulecoyote/xunit/blob/master/Main/xunit.extensions/DataTheories/TheoryAttribute.cs).

    The problem is that NCrunch does not "see" tests marked with attributes derived from [Fact] (e.g. [Theory]) unless the test assembly also contains at least one plain [Fact] test.
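
    For illustration, a minimal sketch of an attribute derived from xUnit's [Fact], assuming a hypothetical IntegrationFact attribute and test (neither appears in the original post):

        using Xunit;

        // Custom attribute derived from [Fact]; the request is for NCrunch to
        // discover tests marked with it just like plain [Fact] tests.
        public class IntegrationFactAttribute : FactAttribute
        {
        }

        public class PaymentTests
        {
            [IntegrationFact]
            public void ChargesCard()
            {
                Assert.True(true);
            }
        }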

    2 votes  ·  0 comments
  4. Add coverage aggregates to the XML report produced by the console runner

    The output of the console runner contains HTML and raw XML reports.

    The HTML report is nice; however, the content of the XML files is a bit too detailed to get statistics from.

    It would be nice to have the same kind of data that is in the HTML file but in XML form (i.e. the % coverage per method/class/assembly/global) - something similar to the dotCover XML or JSON reports. In fact, if the format were exactly the same as the dotCover reports, we could leverage tools that can already process those files, such as SonarQube.

    6 votes  ·  2 comments
  5. Add a console runner NuGet package

    A console runner package would make integration into build systems simpler.

    There would be no need to install the runner on each agent; just restore the package from NuGet and run it.

    7 votes  ·  1 comment
  6. The ability to mass-unpin tests

    NCrunch is great, but there is one annoying thing in the GUI: there is no option to unpin (or pin) tests by right-clicking a whole namespace (or even the root namespace). You can only pin/unpin individual test methods or test classes.

    6 votes  ·  1 comment
  7. Allow the console tool to be run continuously

    I'm super happy with the NCrunch console tool that was just released:

    http://www.ncrunch.net/documentation/tools_console-tool

    However, I would also like it to be able to run as a daemon, continuously watching a specified folder/solution and using NCrunch's excellent incremental build/test features.

    Ideally it would output a maximum of N failing tests:

    1: *. [NameSpace].Test failed | at line 12 of filename.
    2: ....

    Ideally, it would also support verbose output in JSON/XML so that plugins for other editors can be written.

    16 votes  ·  2 comments
  8. Provide history of failed tests

    When working on a project where NCrunch is running (all) the tests automatically in the background, some tests occasionally fail and then suddenly pass.

    I would like to have some kind of history for failed tests so that I can delve further into them later on to figure out what's going wrong.

    In addition, it would be nice to be able to configure the time span (or number of test runs) for that historical data, to limit the memory consumed (I could imagine that using this feature on large projects would have a significant memory impact).

    6 votes  ·  3 comments
  9. Provide a smoother upgrade path for Grid Node Server - (backwards) compatibility or run in parallel.

    At the moment, the grid node servers must be updated at the same time as all VS clients.

    Either allow mismatched versions of NCrunch in VS and distributed nodes, or allow multiple instances of the grid node server on the same machine.

    This is holding back our adoption of v2.8 - whether we install the server first or ask developers to upgrade their clients first, there is a period where nobody has distributed processing.

    2 votes  ·  2 comments
  10. Allow navigation to test in test window

    From the test status gutter or test name, I would like to be able to navigate (via right-click or hotkey) to the test in the test status window, so that I can quickly ignore/unignore all tests in a test class.

    5 votes  ·  2 comments
  11. Edit entries in 'Additional Files to Include'

    Put an Edit button in the 'Additional Files to Include' dialog, which is opened from the NCrunch Configuration window.

    This would allow you to edit the path of an existing entry in the list; at the moment I have to remove and then re-add it, which is a little frustrating.

    2 votes  ·  0 comments
  12. Attribute to enable tests with a "shared resource" to run on the same server within the same "session".

    Original thread: http://forum.ncrunch.net/yafpostst1368Restrict-multiple-tests-to-run-within-same-grid-node-in-console-runner.aspx

    This is primarily for the console test runner (or, e.g., an engine mode that runs all tests automatically).

    Scenario: I have some integration tests using large test data. This test data is created once and then reused between the tests. When using grid nodes, tests that use the same "test data resource" might end up running on different grid nodes, causing each test to create new test data (which makes the tests slower than if they were run serially).

    It would be nice to have an attribute, e.g. [SharedResource("TestBlob")], making all tests with the same "shared resource" to be…
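
    A minimal sketch of what such a hypothetical attribute could look like; [SharedResource] does not exist in NCrunch.Framework today, so the shape below is an assumption based purely on the description above:

        using System;

        // Hypothetical attribute, not part of NCrunch.Framework; the name and
        // semantics are taken from the request above.
        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
        public class SharedResourceAttribute : Attribute
        {
            // Tests declaring the same resource name would be scheduled onto
            // the same grid node within a single session.
            public string ResourceName { get; }

            public SharedResourceAttribute(string resourceName)
            {
                ResourceName = resourceName;
            }
        }

        // Usage as described in the request:
        // [SharedResource("TestBlob")]
        // public void ImportsLargeTestData() { ... }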

    1 vote  ·  0 comments
  13. Allow SpecFlow scenario outline examples to have their own 'dot marker' in the margin

    Currently, if you have a scenario outline with several examples, there is no way to run a specific example as a test. I can only see the dots on the actual scenario outline and then have to run 'x' tests at once (where 'x' is the number of scenario examples).

    I'd like a 'dot' by each line in the examples so I could run a specific scenario example on its own, or see which example is actually failing.

    9 votes  ·  4 comments
  14. Show the PID of process in which a test is running

    I wish to be able to see the PID of the process running the current test when I break into it, so that I can attach a performance analyzer.

    See also:
    http://forum.ncrunch.net/yafpostst1362How-to-identify-the-ProcessId--PID--in-which-a-test-is-running.aspx
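
    As an interim workaround sketch (an assumption, not an NCrunch feature), a test can report the ID of its own host process via the standard System.Diagnostics API; the PidHelper class and LogCurrentPid method names are hypothetical:

        using System;
        using System.Diagnostics;

        public static class PidHelper
        {
            // Writes the ID of the process hosting the current test to the
            // test output, so a profiler or debugger can be attached to it.
            public static void LogCurrentPid()
            {
                Console.WriteLine($"Test host PID: {Process.GetCurrentProcess().Id}");
            }
        }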

    1 vote  ·  0 comments
  15. Using Xbox as NCrunch Grid node server

    When I'm programming, my Xbox is idle. It would be awesome if I could run NCrunch builds and tests on it.

    10 votes  ·  1 comment
  16. Clean up the noise when an AssertFailedException (or similar) is thrown

    When an assertion in a test fails, you get the full stack trace of the AssertFailedException, which is noisy. For instance:

    Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException: Assert.AreEqual failed. Expected:<1200>. Actual:<900>.
    at Microsoft.VisualStudio.TestTools.UnitTesting.Assert.HandleFail(String assertionName, String message, Object[] parameters)
    at Microsoft.VisualStudio.TestTools.UnitTesting.Assert.AreEqual<T>
    at Microsoft.VisualStudio.TestTools.UnitTesting.Assert.AreEqual<T>
    at TestInheritance.Tests.RectangleTests.AreaAfterSettingHeightWidth() in d:[some likely really long path]\RectangleTests.cs:line 22

    Of all the above, the only pertinent information is the message text of the exception and the bottom frame of the stack trace, where the assertion failure happened. For instance:

    Assert.AreEqual failed. Expected:<1200>. Actual:<900>.
    at TestInheritance.Tests.RectangleTests.AreaAfterSettingHeightWidth() in d:\users\floyd.may\documents\visual studio 2012\Projects\TestInheritance\TestInheritance.Tests\RectangleTests.cs:line 22

    Ideally, for MSTest, NUnit, etc., each testing framework's specific assert-fail exceptions…

    14 votes  ·  1 comment
  17. Ignore Categories of tests

    Note that this isn't the same thing as the engine mode not automatically running a category, etc. See http://forum.ncrunch.net/yafpostsm6015Ignore-categories-of-tests.aspx for more info.
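
    For context, a test category is just an attribute on the test; a minimal NUnit sketch, with the "Integration" category name chosen purely for illustration:

        using NUnit.Framework;

        public class IntegrationTests
        {
            // A test tagged with an NUnit category; the request is for NCrunch
            // to be able to ignore a whole category such as "Integration".
            [Test, Category("Integration")]
            public void WritesToRealDatabase()
            {
                Assert.Pass();
            }
        }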

    21 votes  ·  2 comments
  18. Use consistent wording in settings

    When you describe the spinner colours, the language is inconsistent. For example:
    Outer circle colour on build failure
    and
    Text colour when the build has failed

    The latter should be:
    Text colour on build failure

    3 votes  ·  1 comment
  19. Visual marker in far left margin when code is collapsed to definition

    For those developers who do code maintenance by collapsing code to definitions (using outlining's Collapse to Definitions), we can't see at a glance which tests in the file are failing. To see that, we have to expand all the code to reveal the red/green markers.
    All that would be needed is to promote a red or green top-level marker to the far left margin to show which test is failing in the file (when the test method is collapsed).

    7 votes  ·  3 comments
  20. Reorganize settings based on events, not entities

    E.g.,

    On build failure:
    Outer Circle Colour – Dark Brown
    Text Colour – Green
    Line Marker Colour – Brown
    Background music – Ride of the Valkyries

    On test failure:
    Outer Circle Colour – Red
    Text Colour – Black
    Line Marker Colour – Red
    Background music – God Defend Visual Studio

    ….

    The reason is that humans are conditioned to think in terms of events: when "this" happens, I want "that". That is, instead of thinking of an entity and all the events that can affect it, we humans prefer to consider an event and all the entities that are affected by…

    2 votes  ·  0 comments