terminal/.github/actions/spelling/allow/microsoft.txt

ACLs
ADMINS
altform
altforms
appendwttlogging
appx
appxbundle
appxerror
appxmanifest
ATL
backplating
bitmaps
BOMs
CPLs
cpptools
cppvsdbg
CPRs
DACL
DACLs
diffs
disposables
dotnetfeed
DTDs
DWINRT
enablewttlogging
Intelli
IVisual
LKG
LOCKFILE
Lxss
mfcribbon
microsoft
microsoftonline
MSAA
msixbundle
MSVC
muxc
netcore
osgvsowi
PFILETIME
pgc
pgo
pgosweep
powerrename
powershell
propkey
pscustomobject
QWORD
robocopy
SACLs
sdkddkver
Shobjidl
Skype
SRW
sxs
Sysinternals
sysnative
systemroot
taskkill
tasklist
tdbuildteamid
unvirtualized
VCRT
vcruntime
Virtualization
visualstudio
vscode
VSTHRD
winsdkver
wlk
wslpath
wtl
wtt
wttlog
Xamarin