terminal/src/renderer/vt/VtSequences.cpp


// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
#include "precomp.h"
#include "vtrenderer.hpp"
#include "../../inc/conattrs.hpp"
#pragma hdrstop
using namespace Microsoft::Console::Render;
// Method Description:
// - Formats and writes a sequence to stop the cursor from blinking.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_StopCursorBlinking() noexcept
{
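// DEC private mode 12 (the att610 cursor-blink mode); the "l" (reset)
// form stops the cursor from blinking.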
return _Write("\x1b[?12l");
}
// Method Description:
// - Formats and writes a sequence to start the cursor blinking. If it's
// hidden, this won't also show it.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_StartCursorBlinking() noexcept
{
return _Write("\x1b[?12h");
}
// Method Description:
// - Formats and writes a sequence to hide the cursor.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_HideCursor() noexcept
{
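// DEC private mode 25 (DECTCEM, text cursor enable); the "l" (reset)
// form hides the cursor.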
return _Write("\x1b[?25l");
}
// Method Description:
// - Formats and writes a sequence to show the cursor.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_ShowCursor() noexcept
{
return _Write("\x1b[?25h");
}
// Method Description:
// - Formats and writes a sequence to erase the remainder of the line starting
// from the cursor position.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_EraseLine() noexcept
{
// The default no-param action of erase line is erase to the right.
// The telnet client doesn't understand the parameterized version,
// so emit the implicit sequence instead.
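// (The implicit "\x1b[K" is equivalent to the parameterized "\x1b[0K",
// erase from the cursor to the end of the line.)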
return _Write("\x1b[K");
}
// Method Description:
// - Formats and writes a sequence to erase a number of characters at the
// current cursor location, replacing them with spaces.
// Delete/insert Character removes/adds N characters from/to the buffer, and
// shifts the remaining chars in the row to the left/right, while Erase
// Character replaces N characters with spaces, and leaves the rest
// untouched.
// Arguments:
// - chars: a number of characters to erase (by overwriting with space)
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_EraseCharacter(const short chars) noexcept
{
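// e.g. chars == 5 emits "\x1b[5X", blanking the next 5 cells with spaces.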
return _WriteFormatted(FMT_COMPILE("\x1b[{}X"), chars);
}
// Method Description:
// - Moves the cursor forward (right) a number of characters.
// Arguments:
// - chars: a number of characters to move cursor right by.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_CursorForward(const short chars) noexcept
{
return _WriteFormatted(FMT_COMPILE("\x1b[{}C"), chars);
}
// Method Description:
// - Formats and writes a sequence to clear the entire screen.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_ClearScreen() noexcept
{
return _Write("\x1b[2J");
}
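// Method Description:
// - Formats and writes a sequence to clear the terminal's scrollback buffer.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.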
[[nodiscard]] HRESULT VtEngine::_ClearScrollback() noexcept
{
return _Write("\x1b[3J");
}
// Method Description:
// - Formats and writes a sequence to either insert or delete a number of lines
// into the buffer at the current cursor location.
// Arguments:
// - sLines: a number of lines to insert or delete
// - fInsertLine: true iff we should insert the lines, false to delete them.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_InsertDeleteLine(const short sLines, const bool fInsertLine) noexcept
{
if (sLines <= 0)
{
return S_OK;
}
if (sLines == 1)
{
return _Write(fInsertLine ? "\x1b[L" : "\x1b[M");
}
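// Multi-line case: emit the parameterized sequence, e.g. "\x1b[3L" to
// insert three lines, or "\x1b[3M" to delete them.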
return _WriteFormatted(FMT_COMPILE("\x1b[{}{}"), sLines, fInsertLine ? 'L' : 'M');
}
// Method Description:
// - Formats and writes a sequence to delete a number of lines from the buffer
// at the current cursor location.
// Arguments:
// - sLines: a number of lines to delete
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_DeleteLine(const short sLines) noexcept
{
return _InsertDeleteLine(sLines, false);
}
// Method Description:
// - Formats and writes a sequence to insert a number of lines into the buffer
// at the current cursor location.
// Arguments:
// - sLines: a number of lines to insert
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_InsertLine(const short sLines) noexcept
{
return _InsertDeleteLine(sLines, true);
}
// Method Description:
// - Formats and writes a sequence to move the cursor to the specified
// coordinate position. The input coord should be in console coordinates,
// where origin=(0,0).
// Arguments:
// - coord: Console coordinates to move the cursor to.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_CursorPosition(const COORD coord) noexcept
{
// VT coords start at 1,1
COORD coordVt = coord;
coordVt.X++;
coordVt.Y++;
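// e.g. console coord {X=0, Y=4} becomes VT position row 5, column 1,
// emitting "\x1b[5;1H".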
return _WriteFormatted(FMT_COMPILE("\x1b[{};{}H"), coordVt.Y, coordVt.X);
}
// Method Description:
// - Formats and writes a sequence to move the cursor to the origin.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_CursorHome() noexcept
{
return _Write("\x1b[H");
}
// Method Description:
// - Formats and writes a sequence to change the current text attributes to the default.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetGraphicsDefault() noexcept
{
return _Write("\x1b[m");
}
// Method Description:
// - Formats and writes a sequence to change the current text attributes to an
// indexed color from the 16-color table.
// Arguments:
// - index: color table index to emit as a VT sequence
// - fIsForeground: true if we should emit the foreground sequence, false for background
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetGraphicsRendition16Color(const BYTE index,
const bool fIsForeground) noexcept
{
// Always check using the foreground flags, because the background flag
// constants sit in a higher nibble.
// Foreground sequences are in [30,37] U [90,97]
// Background sequences are in [40,47] U [100,107]
// The "dark" sequences are in the first 7 values, the bright sequences in the second set.
// Note that text brightness and boldness are different in VT. Boldness is
// handled by _SetGraphicsBoldness. Here, we can emit either bright or
// dark colors. Conhost, when acting as a terminal, can't draw bold
// characters, so it displays "bold" as bright, and most terminals
// likewise display the bright color when rendering bolded text.
// By specifying the boldness and brightness separately, we'll make sure the
// terminal has an accurate representation of our buffer.
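// e.g. index 9 (bright red in ANSI order) as a foreground selects the 90
// prefix with low bits 1, emitting "\x1b[91m"; as a background it emits
// "\x1b[101m".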
const auto prefix = WI_IsFlagSet(index, FOREGROUND_INTENSITY) ? (fIsForeground ? 90 : 100) : (fIsForeground ? 30 : 40);
return _WriteFormatted(FMT_COMPILE("\x1b[{}m"), prefix + (index & 7));
}
// Method Description:
// - Formats and writes a sequence to change the current text attributes to an
// indexed color from the 256-color table.
// Arguments:
// - index: color table index to emit as a VT sequence
// - fIsForeground: true if we should emit the foreground sequence, false for background
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetGraphicsRendition256Color(const BYTE index,
const bool fIsForeground) noexcept
{
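// Emits SGR 38;5;<index> for a foreground color, or 48;5;<index> for a background color.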
return _WriteFormatted(FMT_COMPILE("\x1b[{}8;5;{}m"), fIsForeground ? '3' : '4', index);
}
// Method Description:
// - Formats and writes a sequence to change the current text attributes to an
// RGB color.
// Arguments:
// - color: The color to emit a VT sequence for
// - fIsForeground: true if we should emit the foreground sequence, false for background
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetGraphicsRenditionRGBColor(const COLORREF color,
const bool fIsForeground) noexcept
{
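// Emits SGR 38;2;<r>;<g>;<b> for a foreground color, or 48;2;<r>;<g>;<b> for a background color.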
const uint8_t r = GetRValue(color);
const uint8_t g = GetGValue(color);
const uint8_t b = GetBValue(color);
return _WriteFormatted(FMT_COMPILE("\x1b[{}8;2;{};{};{}m"), fIsForeground ? '3' : '4', r, g, b);
}
// Method Description:
// - Formats and writes a sequence to change the current text attributes to the
// default foreground or background. Does not affect the boldness of text.
// Arguments:
// - fIsForeground: true if we should emit the foreground sequence, false for background
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetGraphicsRenditionDefaultColor(const bool fIsForeground) noexcept
{
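// SGR 39 resets the foreground to its default; SGR 49 does the same for the background.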
return _Write(fIsForeground ? ("\x1b[39m") : ("\x1b[49m"));
}
// Method Description:
// - Formats and writes a sequence to change the terminal's window size.
// Arguments:
// - sWidth: number of columns the terminal should display
// - sHeight: number of rows the terminal should display
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_ResizeWindow(const short sWidth, const short sHeight) noexcept
{
if (sWidth < 0 || sHeight < 0)
{
return E_INVALIDARG;
}
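// This is the XTerm window manipulation sequence: CSI 8 ; <height> ; <width> t.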
return _WriteFormatted(FMT_COMPILE("\x1b[8;{};{}t"), sHeight, sWidth);
}
// Method Description:
// - Formats and writes a sequence requesting that the terminal at the other end
//      report its cursor position. The terminal will reply back on the vt input handle.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_RequestCursor() noexcept
{
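// DSR 6 (CSI 6 n) asks the terminal for a Cursor Position Report.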
return _Write("\x1b[6n");
}
// Method Description:
// - Formats and writes a sequence to change the terminal's title string
// Arguments:
// - title: string to use as the new title of the window.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_ChangeTitle(_In_ const std::string& title) noexcept
{
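// OSC 0 sets the window title (and icon name); the string is terminated with BEL (\x7).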
return _WriteFormatted(FMT_COMPILE("\x1b]0;{}\x7"), title);
}
// Method Description:
// - Formats and writes a sequence to change the boldness of the following text.
// Arguments:
// - isBold: If true, we'll embolden the text. Otherwise we'll remove the bold attribute.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetBold(const bool isBold) noexcept
{
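// SGR 1 enables bold; SGR 22 resets both bold and faint, since they share the same reset sequence.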
return _Write(isBold ? "\x1b[1m" : "\x1b[22m");
}
// Method Description:
// - Formats and writes a sequence to change the faintness of the following text.
// Arguments:
// - isFaint: If true, we'll make the text faint. Otherwise we'll remove the faintness.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetFaint(const bool isFaint) noexcept
{
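// SGR 2 enables faint; SGR 22 is the shared reset for bold and faint.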
return _Write(isFaint ? "\x1b[2m" : "\x1b[22m");
}
// Method Description:
// - Formats and writes a sequence to change the underline of the following text.
// Arguments:
// - isUnderlined: If true, we'll underline the text. Otherwise we'll remove the underline.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetUnderlined(const bool isUnderlined) noexcept
{
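// SGR 4 enables single underline; SGR 24 clears both single and double underline.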
return _Write(isUnderlined ? "\x1b[4m" : "\x1b[24m");
}
// Method Description:
// - Formats and writes a sequence to change the double underline of the following text.
// Arguments:
// - isUnderlined: If true, we'll doubly underline the text. Otherwise we'll remove the underline.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetDoublyUnderlined(const bool isUnderlined) noexcept
{
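// SGR 21 selects double underline here; SGR 24 is the shared reset for single and double underline.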
return _Write(isUnderlined ? "\x1b[21m" : "\x1b[24m");
}
// Method Description:
// - Formats and writes a sequence to change the overline of the following text.
// Arguments:
// - isOverlined: If true, we'll overline the text. Otherwise we'll remove the overline.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetOverlined(const bool isOverlined) noexcept
{
return _Write(isOverlined ? "\x1b[53m" : "\x1b[55m");
}
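
// Usage note (illustrative sketch only, not part of the original file): a VT
// engine would normally emit this sequence only when the overline state has
// actually changed since the last attribute update, roughly along these lines.
// The `_lastTextAttributes` member and the TextAttribute accessor names are
// assumptions for illustration; the real forwarding happens in the
// Xterm256Engine's _UpdateExtendedAttrs, which may differ in detail.
//
//     if (textAttributes.IsOverlined() != _lastTextAttributes.IsOverlined())
//     {
//         RETURN_IF_FAILED(_SetOverlined(textAttributes.IsOverlined()));
//         _lastTextAttributes.SetOverline(textAttributes.IsOverlined());
//     }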

// Method Description:
// - Formats and writes a sequence to change the italics of the following text.
// Arguments:
// - isItalic: If true, we'll italicize the text. Otherwise we'll remove the italics.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetItalic(const bool isItalic) noexcept
{
return _Write(isItalic ? "\x1b[3m" : "\x1b[23m");
}

// Method Description:
// - Formats and writes a sequence to change the blinking of the following text.
// Arguments:
// - isBlinking: If true, we'll start the text blinking. Otherwise we'll stop the blinking.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetBlinking(const bool isBlinking) noexcept
{
return _Write(isBlinking ? "\x1b[5m" : "\x1b[25m");
}

// Method Description:
// - Formats and writes a sequence to change the visibility of the following text.
// Arguments:
// - isInvisible: If true, we'll make the text invisible. Otherwise we'll make it visible.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetInvisible(const bool isInvisible) noexcept
{
return _Write(isInvisible ? "\x1b[8m" : "\x1b[28m");
}
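
// Illustrative sketch only (not part of the original file): the helpers in
// this group each map one rendition attribute onto its SGR enable/disable
// pair. A hypothetical caller applying a full attribute update might chain
// them roughly as below; the TextAttribute accessor names are assumptions,
// and the real engines do this work in their _UpdateExtendedAttrs
// implementations.
//
//     RETURN_IF_FAILED(_SetItalic(textAttributes.IsItalic()));
//     RETURN_IF_FAILED(_SetBlinking(textAttributes.IsBlinking()));
//     RETURN_IF_FAILED(_SetInvisible(textAttributes.IsInvisible()));
//     RETURN_IF_FAILED(_SetOverlined(textAttributes.IsOverlined()));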

// Method Description:
// - Formats and writes a sequence to change the crossed out state of the following text.
// Arguments:
// - isCrossedOut: If true, we'll cross out the text. Otherwise we'll stop crossing out.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetCrossedOut(const bool isCrossedOut) noexcept
{
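// SGR 9 enables the crossed-out (strikethrough) rendition; SGR 29 disables it.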
return _Write(isCrossedOut ? "\x1b[9m" : "\x1b[29m");
}
// Method Description:
// - Formats and writes a sequence to change the reversed state of the following text.
// Arguments:
// - isReversed: If true, we'll reverse the text. Otherwise we'll remove the reversed state.
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetReverseVideo(const bool isReversed) noexcept
{
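// SGR 7 enables reverse video; SGR 27 disables it.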
return _Write(isReversed ? "\x1b[7m" : "\x1b[27m");
}
// Method Description:
// - Send a sequence to the connected terminal to request win32-input-mode from
// them. This will enable the connected terminal to send us full INPUT_RECORDs
// as input. If the terminal doesn't understand this sequence, it'll just
// ignore it.
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_RequestWin32Input() noexcept
{
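// DECSET 9001 is the private mode for win32-input-mode; terminals that
// don't recognize it will simply ignore this sequence.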
return _Write("\x1b[?9001h");
}
// Method Description:
// - Formats and writes a sequence to add a hyperlink to the terminal buffer
// Arguments:
// - uri: the URI that the hyperlink should point to
// - customId: the user-provided hyperlink ID, if any (empty for auto-assigned IDs)
// - numberId: the auto-assigned numeric ID, used when no custom ID was provided
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_SetHyperlink(const std::wstring_view& uri, const std::wstring_view& customId, const uint16_t& numberId) noexcept
{
// Opening OSC8 sequence
if (customId.empty())
{
// This is the case of auto-assigned IDs:
// send the auto-assigned ID, prefixed with the PID of this session
// (we do this so different conpty sessions do not overwrite each other's hyperlinks)
const auto sessionID = GetCurrentProcessId();
const auto uriStr = til::u16u8(uri);
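// e.g. this emits something like "\x1b]8;id=1234-1;https://example.com\x1b\\"
// (assuming a session PID of 1234 and an auto-assigned numberId of 1)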
return _WriteFormatted(FMT_COMPILE("\x1b]8;id={}-{};{}\x1b\\"), sessionID, numberId, uriStr);
}
else
{
// This is the case of user-defined IDs:
// send the user-defined ID, prefixed with a "u"
// (we do this so no application can accidentally override a user defined ID)
const auto uriStr = til::u16u8(uri);
const auto customIdStr = til::u16u8(customId);
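// e.g. this emits something like "\x1b]8;id=u-myId;https://example.com\x1b\\"
// for a hypothetical user-provided ID of "myId"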
return _WriteFormatted(FMT_COMPILE("\x1b]8;id=u-{};{}\x1b\\"), customIdStr, uriStr);
}
}
// Method Description:
// - Formats and writes a sequence to end a hyperlink to the terminal buffer
// Arguments:
// - <none>
// Return Value:
// - S_OK if we succeeded, else an appropriate HRESULT for failing to allocate or write.
[[nodiscard]] HRESULT VtEngine::_EndHyperlink() noexcept
{
// Closing OSC8 sequence
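// (an OSC 8 with an empty ID and empty URI terminates the current hyperlink)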
return _Write("\x1b]8;;\x1b\\");
}