terminal/build/pipelines/templates/build-console-steps.yml

parameters:
  additionalBuildArguments: ''
  testLogPath: '$(Build.BinariesDirectory)\$(BuildPlatform)\$(BuildConfiguration)\testsOnBuildMachine.wtl'

steps:
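# Check out this repository (including submodules) onto a clean working tree.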
- checkout: self
  submodules: true
  clean: true
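# Install a pinned NuGet version so restores behave the same on every agent.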
- task: NuGetToolInstaller@0
  displayName: 'Use NuGet 5.2.0'
  inputs:
    versionSpec: 5.2.0
# In the Microsoft Azure DevOps tenant, NuGetCommand is ambiguous.
# This should be `task: NuGetCommand@2`
- task: 333b11bd-d341-40d9-afcf-b32d5ce6f23b@2
  displayName: Restore NuGet packages for solution
  inputs:
    command: restore
    feedsToUse: config
    configPath: NuGet.config
    restoreSolution: OpenConsole.sln
    restoreDirectory: '$(Build.SourcesDirectory)\packages'
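# Same GUID workaround as above (this is also NuGetCommand@2), here restoring the
# packages used by auxiliary build scripts rather than the solution itself.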
- task: 333b11bd-d341-40d9-afcf-b32d5ce6f23b@2
  displayName: Restore NuGet packages for extraneous build actions
  inputs:
    command: restore
    feedsToUse: config
    configPath: NuGet.config
    restoreSolution: build/packages.config
    restoreDirectory: '$(Build.SourcesDirectory)\packages'
# The environment variable VCToolsInstallDir isn't defined on lab machines, so we need to retrieve it ourselves.
- script: |
    rem Ask vswhere for the newest Visual Studio install that carries MSBuild, and stash its path in a temp file.
    "%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -Latest -requires Microsoft.Component.MSBuild -property InstallationPath > %TEMP%\vsinstalldir.txt
    set /p _VSINSTALLDIR15=<%TEMP%\vsinstalldir.txt
    del %TEMP%\vsinstalldir.txt
    rem Load the VS developer environment (which defines VCToolsInstallDir) and surface it as a pipeline variable.
    call "%_VSINSTALLDIR15%\Common7\Tools\VsDevCmd.bat"
    echo VCToolsInstallDir = %VCToolsInstallDir%
    echo ##vso[task.setvariable variable=VCToolsInstallDir]%VCToolsInstallDir%
  displayName: 'Retrieve VC tools directory'
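# Dump the agent's environment variables into the build log for diagnostics.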
- task: CmdLine@1
  displayName: 'Display build machine environment variables'
  inputs:
    filename: 'set'
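# Only needed for PGO-optimized builds: pulls down the previously trained PGO
# databases (published as a NuGet artifact) so the linker can use them.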
- task: powershell@2
  displayName: 'Restore PGO database'
  condition: eq(variables['PGOBuildMode'], 'Optimize')
  inputs:
    targetType: filePath
    workingDirectory: $(Build.SourcesDirectory)\tools\PGODatabase
    filePath: $(Build.SourcesDirectory)\tools\PGODatabase\restore-pgodb.ps1
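# The main build: compile OpenConsole.sln for the requested platform and
# configuration, forwarding any extra MSBuild arguments from the template parameters.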
- task: VSBuild@1
  displayName: 'Build solution **\OpenConsole.sln'
  inputs:
    solution: '**\OpenConsole.sln'
    vsVersion: 16.0
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    msbuildArgs: "${{ parameters.additionalBuildArguments }}"
    clean: true
    maximumCpuCount: true
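# Sanity-check the generated MSIX package for known packaging regressions.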
- task: PowerShell@2
  displayName: 'Check MSIX for common regressions'
  # The PGO runtime needs its own CRT, which we carry in the package for convenience.
  # That trips this script's checks, so skip it here; the PGO-instrumented package never ships anyway.
  condition: ne(variables['PGOBuildMode'], 'Instrument')
  inputs:
    targetType: inline
    script: |
      $Package = Get-ChildItem -Recurse -Filter "CascadiaPackage_*.msix"
      .\build\scripts\Test-WindowsTerminalPackage.ps1 -Verbose -Path $Package.FullName
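# Source-index the PDBs so debuggers can fetch the exact sources for this commit.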
- task: powershell@2
  displayName: 'Source Index PDBs'
  condition: ne(variables['PGOBuildMode'], 'Instrument')
  inputs:
    targetType: filePath
    filePath: build\scripts\Index-Pdbs.ps1
    arguments: -SearchDir '$(Build.SourcesDirectory)' -SourceRoot '$(Build.SourcesDirectory)' -recursive -Verbose -CommitId $(Build.SourceVersion)
    errorActionPreference: silentlyContinue
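# MSBuild calls the 32-bit platform "Win32" while the pipeline calls it "x86";
# expose the MSBuild spelling as a variable for the steps below.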
- task: PowerShell@2
  displayName: 'Rationalize build platform'
  inputs:
    targetType: inline
    script: |
      $Arch = "$(BuildPlatform)"
      If ($Arch -Eq "x86") { $Arch = "Win32" }
      Write-Host "##vso[task.setvariable variable=RationalizedBuildPlatform]${Arch}"
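# For PGO-optimized builds, confirm that each of the trained binaries was
# actually linked with PGO data before letting the build proceed.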
- task: PowerShell@2
  displayName: 'Validate binaries are optimized'
  condition: eq(variables['PGOBuildMode'], 'Optimize')
  inputs:
    targetType: inline
    script: |
      $Binaries = 'OpenConsole.exe', 'WindowsTerminal.exe', 'TerminalApp.dll', 'TerminalConnection.dll', 'Microsoft.Terminal.Control.dll', 'Microsoft.Terminal.Remoting.dll', 'Microsoft.Terminal.Settings.Editor.dll', 'Microsoft.Terminal.Settings.Model.dll'
      foreach ($BinFile in $Binaries)
      {
        & "$(Build.SourcesDirectory)\tools\PGODatabase\verify-pgo.ps1" "$(Build.SourcesDirectory)/bin/$(RationalizedBuildPlatform)/$(BuildConfiguration)/$BinFile"
      }
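# Run the unit-test modules (*unit.test*.dll) directly on the build agent,
# writing a WTT log so the results can be reported afterwards.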
- task: PowerShell@2
  displayName: 'Run Unit Tests'
  inputs:
    targetType: filePath
    filePath: build\scripts\Run-Tests.ps1
    arguments: -MatchPattern '*unit.test*.dll' -Platform '$(RationalizedBuildPlatform)' -Configuration '$(BuildConfiguration)' -LogPath '${{ parameters.testLogPath }}'
  condition: and(and(succeeded(), ne(variables['PGOBuildMode'], 'Instrument')), or(eq(variables['BuildPlatform'], 'x64'), eq(variables['BuildPlatform'], 'x86')))
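
# NOTE: both test steps are skipped when PGOBuildMode is 'Instrument'; instrumented
# binaries are exercised by the dedicated PGO training scenarios instead (see #10071).
# For local debugging, Run-Tests.ps1 can be invoked directly. A minimal sketch, assuming
# an x64 Release build and substituting concrete values for the pipeline variables:
#   .\build\scripts\Run-Tests.ps1 -MatchPattern '*unit.test*.dll' -Platform 'x64' -Configuration 'Release' -LogPath 'test.wtl'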
- task: PowerShell@2
  displayName: 'Run Feature Tests (x64 only)'
  inputs:
    targetType: filePath
    filePath: build\scripts\Run-Tests.ps1
    arguments: -MatchPattern '*feature.test*.dll' -Platform '$(RationalizedBuildPlatform)' -Configuration '$(BuildConfiguration)' -LogPath '${{ parameters.testLogPath }}'
  condition: and(and(succeeded(), ne(variables['PGOBuildMode'], 'Instrument')), eq(variables['BuildPlatform'], 'x64'))
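
# TAEF writes WTT (.wtl) logs; the Helix scripts borrowed from microsoft-ui-xaml
# convert them to xUnit format so AzDO can ingest the results (see #6992).
# The 'unused' rerun paths appear to be placeholders: on the build machine each
# test runs once, so there are no single/multiple-rerun logs to merge.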
- task: PowerShell@2
  displayName: 'Convert Test Logs from WTL to xUnit format'
  inputs:
    targetType: filePath
    filePath: build\Helix\ConvertWttLogToXUnit.ps1
    arguments: -WttInputPath '${{ parameters.testLogPath }}' -WttSingleRerunInputPath 'unused.wtl' -WttMultipleRerunInputPath 'unused2.wtl' -XUnitOutputPath 'onBuildMachineResults.xml' -TestNamePrefix '$(BuildConfiguration).$(BuildPlatform)'
  condition: and(ne(variables['PGOBuildMode'], 'Instrument'), or(eq(variables['BuildPlatform'], 'x64'), eq(variables['BuildPlatform'], 'x86')))
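
# Publishing the converted results restores the test records in the AzDO build/test
# panel that were lost when the suites moved from the VSTest adapter to te.exe
# (see #4490 and #6992).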
- task: PublishTestResults@2
  displayName: 'Upload converted test logs'
  condition: ne(variables['PGOBuildMode'], 'Instrument')
  inputs:
    testResultsFormat: 'xUnit' # Options: JUnit, NUnit, VSTest, xUnit, cTest
    testResultsFiles: '**/onBuildMachineResults.xml'
    #searchFolder: '$(System.DefaultWorkingDirectory)' # Optional
    #mergeTestResults: false # Optional
    #failTaskOnFailedTests: false # Optional
    testRunTitle: 'On Build Machine Tests' # Optional
    buildPlatform: $(BuildPlatform) # Optional
    buildConfiguration: $(BuildConfiguration) # Optional
    #publishRunAttachments: true # Optional
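
# Stage the raw WTT logs and the converted xUnit results with the build output so
# test failures can be investigated from the published artifacts.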
- task: CopyFiles@2
  displayName: 'Copy result logs to Artifacts'
  inputs:
    Contents: |
      **/*.wtl
      **/*onBuildMachineResults.xml
      ${{ parameters.testLogPath }}
    TargetFolder: '$(Build.ArtifactStagingDirectory)/$(BuildConfiguration)/$(BuildPlatform)/test'
    OverWrite: true
    flattenFolders: true
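
# Stage the packaged app (.appx/.msix plus .appxsym symbols) for later pipeline
# stages; Microsoft.VCLibs is excluded since it is a redistributable framework
# dependency rather than build output.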
- task: CopyFiles@2
  displayName: 'Copy *.appx/*.msix to Artifacts (Non-PR builds only)'
  inputs:
    Contents: |
      **/*.appx
      **/*.msix
      **/*.appxsym
      !**/Microsoft.VCLibs*.appx
    TargetFolder: '$(Build.ArtifactStagingDirectory)/appx'
    OverWrite: true
    flattenFolders: true
condition: succeeded()
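# Stage the payload the test runs need: the built binaries and TAEF metadata,
# the VCLibs dependency package, and the TestHostApp outputs, with symbols and
# build intermediates (*.pdb, *.ipdb, *.obj, *.pch) filtered out.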
- task: CopyFiles@2
displayName: 'Copy outputs needed for test runs to Artifacts'
inputs:
Contents: |
$(Build.SourcesDirectory)/bin/$(RationalizedBuildPlatform)/$(BuildConfiguration)/*.exe
$(Build.SourcesDirectory)/bin/$(RationalizedBuildPlatform)/$(BuildConfiguration)/*.dll
$(Build.SourcesDirectory)/bin/$(RationalizedBuildPlatform)/$(BuildConfiguration)/*.xml
**/Microsoft.VCLibs.*.appx
**/TestHostApp/*.exe
**/TestHostApp/*.dll
**/TestHostApp/*.xml
!**/*.pdb
!**/*.ipdb
!**/*.obj
!**/*.pch
TargetFolder: '$(Build.ArtifactStagingDirectory)/$(BuildConfiguration)/$(BuildPlatform)/test'
OverWrite: true
flattenFolders: true
condition: succeeded()
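# Everything staged above is published as the single unified 'drop' artifact
# that later phases (including the Helix test phase) consume.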
- task: PublishBuildArtifacts@1
displayName: 'Publish All Build Artifacts'
inputs:
PathtoPublish: '$(Build.ArtifactStagingDirectory)'
ArtifactName: 'drop'
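# PGO instrumentation builds additionally stage the PGO databases (*.pgd)
# produced by the instrumented link so the training pipeline can publish them.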
- task: CopyFiles@2
displayName: 'Copy PGO databases needed for PGO instrumentation run'
inputs:
Contents: |
**/*.pgd
TargetFolder: '$(Build.ArtifactStagingDirectory)/$(BuildConfiguration)/PGO/$(BuildPlatform)'
OverWrite: true
flattenFolders: true
condition: and(succeeded(), eq(variables['PGOBuildMode'], 'Instrument'))
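# The PGO databases ship as a separate 'PGO' artifact; like the copy step
# above, this only runs when PGOBuildMode is 'Instrument'.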
- task: PublishBuildArtifacts@1
displayName: 'Publish All PGO Artifacts'
inputs:
PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(BuildConfiguration)/PGO'
ArtifactName: 'PGO'
condition: and(succeeded(), eq(variables['PGOBuildMode'], 'Instrument'))