Feature Requests

297 results found

  1. Create a 'Kill Currently Running Tests and Execute Selected Tests Immediately' button

    The existing 'Run selected test(s) in new process' button just gives the selected tests high priority. This is very irritating when longer-running tests are clogging the queue. I often go to the Processing Queue to kill the tests I don't care about so that my test will start sooner. Worse, sometimes the test I want runs together with long tests I don't want, so I can't kill just the long tests without removing my requested test as well. Just run my test, right now! The others can wait.

    7 votes

  2. Support IntelliTrace when debugging tests

    At the moment I am unable to make use of IntelliTrace when debugging tests using the NCrunch test runner.

    I use IntelliTrace frequently while debugging, and find it especially useful when I want to look into ADO.NET calls from ORMs.

    4 votes

  3. Cleanup of workspace

    For faster unit-testing, I am using a RAM-drive as workspace, with 4GB space.

    NCrunch keeps all old build artifacts around until they are no longer needed. In practice, this means that as I am working, the workspace keeps on filling up, since a change in one place which doesn't impact something else means both versions are kept in the workspace.

    As a result, the 4GB quickly gets gobbled up, forcing me to clean up by deactivating/reactivating NCrunch (which rebuilds everything, even though this isn't necessary) or by manually deleting as much as possible in the filesystem.

    8 votes

  4. Add ExclusivelyUses to the NCrunch configuration settings as an alternative to using the attribute.

    If you could control the ExclusivelyUses values for a test, project, or assembly from the NCrunch configuration settings as well as through the attribute, projects and solutions would not need references to NCrunch.Framework.dll. For teams where only some developers use NCrunch, the settings would then be kept per user.
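    For context, this is roughly what the attribute-based approach looks like today, and it is what forces the NCrunch.Framework.dll reference into the test project (the resource name "Database" is just an illustrative example):

```csharp
// Today's approach: the [ExclusivelyUses] attribute from NCrunch.Framework
// marks tests that must not run concurrently with any other test holding
// the same named resource.
using NCrunch.Framework;

public class CustomerRepositoryTests
{
    [ExclusivelyUses("Database")]
    public void WritesCustomerRow()
    {
        // ... touches the shared database ...
    }
}
```

    Moving this into the per-user NCrunch configuration would let the test code stay attribute-free.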

    3 votes

  5. Show code coverage metric in the status bar next to "N"

    It would save a few clicks if code coverage were displayed in the status bar next to the "N", or instead of the green "N".
    Besides, it would be a constant reminder of the current code coverage before commits.

    12 votes

  6. xUnit tests search: take into account attributes derived from [Fact]

    xUnit allows you to extend its functionality by deriving from the [Fact] attribute; for example, this approach can be used to generate test case parameters. It even has an out-of-the-box [Theory] attribute which is derived from [Fact] (see https://github.com/paulecoyote/xunit/blob/master/Main/xunit.extensions/DataTheories/TheoryAttribute.cs).

    The problem is that NCrunch does not "see" tests marked with attributes derived from [Fact] (e.g. [Theory]) unless the test assembly explicitly contains at least one plain [Fact] test.
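    To illustrate, a derived attribute of the kind xUnit supports but NCrunch reportedly misses might look like this (the attribute and test names here are illustrative):

```csharp
using Xunit;

// xUnit discovers any attribute deriving from FactAttribute; per this
// report, NCrunch does not, unless a plain [Fact] also exists in the
// same assembly.
public class NamedFactAttribute : FactAttribute
{
    public NamedFactAttribute(string name)
    {
        DisplayName = name;  // FactAttribute lets subclasses customise metadata
    }
}

public class DerivedAttributeTests
{
    [NamedFact("custom display name")]  // found by xUnit, missed by NCrunch
    public void Works() => Assert.True(true);
}
```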

    2 votes

  7. Add a console runner nuget package

    A console runner package would make integration in build systems simpler.

    There would be no need to install the runner on each agent; just get the package from NuGet and run it.

    7 votes

  8. Allow the console tool to be run continuously

    I'm super happy with the NCrunch console tool that was just released:

    http://www.ncrunch.net/documentation/tools_console-tool

    However, I would also like it to be able to run as a daemon, continuously watching a specified folder/solution using NCrunch's excellent incremental build/test features.

    Ideally it would output a maximum of N failing tests:

    1: *. [NameSpace].Test failed | at line 12 of filename.
    2: ....

    Ideally it would also support verbose output in JSON/XML so that plugins for other editors can be written.

    16 votes

  9. Provide history of failed tests

    When working on a project where NCrunch is running (all) the tests automatically in the background, some tests occasionally fail and then suddenly pass.

    I would like to have some kind of history for failed tests so that I can even later on delve further into the failed tests to figure out what's going wrong.

    In addition, it would be nice to be able to configure the time span (or number of test runs) for that historical data to limit the memory consumed (I could imagine that using this feature on large projects will have a significant memory impact).

    6 votes

  10. Provide a smoother upgrade path for the Grid Node Server - (backwards) compatibility or running in parallel.

    At the moment, the grid node servers must be updated at the same time as all VS clients.

    Either allow mismatched versions of NCrunch in VS and distributed nodes, or allow multiple instances of the grid node server on the same machine.

    This is holding back our adoption of v2.8 - whether we install the server first or ask developers to upgrade their client first, we have a period where nobody has distributed processing.

    2 votes

  11. Allow navigation to test in test window

    From the test status gutter or test name, I would like to be able to navigate to the test (right click or hotkey) in the test status window so that I can quickly ignore/unignore all tests in a test class easily.

    5 votes

  12. Attribute to enable tests with "shared resource" to run on same server within same "session".

    Original thread: http://forum.ncrunch.net/yaf_postst1368_Restrict-multiple-tests-to-run-within-same-grid-node-in-console-runner.aspx

    This is primarily for the console test runner (or, e.g., an engine mode that automatically runs all tests).

    Scenario: I have some integration tests using large test data. This test data is created once and then reused between these tests. When using grid nodes, tests using the same "test data resource" might end up running on different grid nodes, causing each to create new test data (which makes the tests slower than if they were run serially).

    It would be nice to have an attribute, e.g. [SharedResource("TestBlob")], making all tests with the same "shared resource" run within…
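    A sketch of how the proposed attribute might be used ([SharedResource] does not exist in NCrunch.Framework today; the name and semantics are the requester's suggestion):

```csharp
// Hypothetical: under this proposal, all tests carrying the same
// SharedResource name would be pinned to a single grid node, so the
// expensive test data is built only once.
[SharedResource("TestBlob")]
public class ImportTests
{
    // ... integration tests that reuse the "TestBlob" test data ...
}

[SharedResource("TestBlob")]
public class ExportTests
{
    // ... would run on the same node as ImportTests ...
}
```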

    1 vote

  13. Allow specflow scenario outline examples to have their own 'dot marker' in the margin

    Currently, if you have a scenario outline with several examples, there is no way to run a specific example as a test. I can only see the dots on the actual scenario outline and then have to run 'x' tests at once (where 'x' is the number of scenario examples).

    I'd like a 'dot' by each line in the examples so I could run a specific scenario example on its own, or see which example is actually failing.

    9 votes

  14. Show the PID of the process in which a test is running

    I wish to be able to see the PID of the currently running test when I break into it, so that I can attach a performance analyzer.

    See also:
    http://forum.ncrunch.net/yaf_postst1362_How-to-identify-the-ProcessId--PID--in-which-a-test-is-running.aspx
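    In the meantime, a common workaround is to print the PID from inside the test itself and attach the analyzer to that process (a sketch; NUnit is shown, but any framework works the same way):

```csharp
using System;
using System.Diagnostics;
using NUnit.Framework;

public class PidTests
{
    [Test]
    public void ShowPid()
    {
        // Process.GetCurrentProcess().Id is the PID of the task runner
        // process currently hosting this test.
        Console.WriteLine($"Running in PID {Process.GetCurrentProcess().Id}");
    }
}
```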

    1 vote

  15. Use an Xbox as an NCrunch grid node server

    When I'm programming, my Xbox is idle. It would be awesome if I could run NCrunch builds and tests on it.

    10 votes

  16. Clean up the noise when an AssertFailedException (or similar) is thrown

    When an assertion in a test fails, you get the full stack trace of the AssertFailedException, which is noisy. For instance:

    Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException: Assert.AreEqual failed. Expected:<1200>. Actual:<900>.
    at Microsoft.VisualStudio.TestTools.UnitTesting.Assert.HandleFail(String assertionName, String message, Object[] parameters)
    at Microsoft.VisualStudio.TestTools.UnitTesting.Assert.AreEqual[T](T expected, T actual, String message, Object[] parameters)
    at Microsoft.VisualStudio.TestTools.UnitTesting.Assert.AreEqual[T](T expected, T actual)
    at TestInheritance.Tests.RectangleTests.AreaAfterSettingHeightWidth() in d:[some likely really long path]\RectangleTests.cs:line 22

    Of all the above, the only pertinent information is the message text of the exception and the bottom of the stack trace, where the assertion failure happened. For instance:

    Assert.AreEqual failed. Expected:<1200>. Actual:<900>.
    at TestInheritance.Tests.RectangleTests.AreaAfterSettingHeightWidth() in d:\users\floyd.may\documents\visual studio 2012\Projects\TestInheritance\TestInheritance.Tests\RectangleTests.cs:line 22

    Ideally, for MSTest, NUnit,…
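    The trimming described above could be sketched as a simple frame filter. The namespace prefixes below are assumptions, and a real implementation would need a maintained list per supported framework:

```csharp
using System;
using System.Linq;

static class StackTraceTrimmer
{
    // Frames originating inside the assertion machinery itself carry no
    // information about where the test failed, so drop them.
    static readonly string[] NoiseFramePrefixes =
    {
        "at Microsoft.VisualStudio.TestTools.UnitTesting.Assert",
        "at NUnit.Framework.Assert",
        "at Xunit.Assert",
    };

    public static string Trim(string stackTrace) =>
        string.Join(Environment.NewLine,
            stackTrace.Split('\n')
                      .Select(line => line.TrimEnd('\r'))
                      .Where(line => !NoiseFramePrefixes.Any(
                          p => line.TrimStart().StartsWith(p))));
}
```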

    14 votes

  17. Use consistent wording in settings

    When you describe the Spinner colours you use inconsistent language:
    For example,
    Outer circle colour on build failure
    AND
    Text colour when the build has failed

    Should be:
    Text colour on build failure

    3 votes

  18. Visual marker in far left margin when code is collapsed to definition

    Those of us who do code maintenance with code collapsed to definitions (using Outlining → Collapse) can't see at a glance which tests in the file are failing. To see that, we have to expand all the code to reveal the red/green markers.
    All that would be needed is to promote a red or green top-level marker to the far left margin to show which test in the file is failing (when the test method is collapsed).

    7 votes

  19. Reorganize settings based on events, not entities

    E.g.,
    On build Failure:
    Outer Circle Colour – Dark Brown
    Text Colour – Green
    Line Marker Colour – Brown
    Background music – Ride of the Valkyries

    On Test Failure
    Outer Circle Colour – Red
    Text Colour – Black
    Line Marker Colour – Red
    Background music – God Defend Visual Studio
    ….

    The reason is that humans are conditioned to think in terms of events: when "this" happens, I want "that". I.e., instead of thinking of an entity and all the events that can affect it, we humans prefer to consider an event and all the entities that are affected by…

    2 votes

  20. Show number of lines uniquely covered by a test

    Today, the test fail/pass indicators on each line show which tests are executing that line.

    I would like to have an option (maybe in the Metrics window) which tells me the number of lines covered solely by one test.

    My goal is to identify code that is only run by a few system/integration tests and clearly needs more tests. Such tests might start/run half of the system just to prepare a test, so that from the coverage point of view the code looks covered, but from the testing point of view it actually isn't…

    4 votes
