terminal/src/host/ut_host/VtRendererTests.cpp

// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
#include "precomp.h"
#include <wextestclass.h>
#include "../../inc/consoletaeftemplates.hpp"
#include "../../types/inc/Viewport.hpp"
#include "../../renderer/vt/Xterm256Engine.hpp"
#include "../../renderer/vt/XtermEngine.hpp"
#include "../Settings.hpp"
using namespace WEX::Common;
using namespace WEX::Logging;
using namespace WEX::TestExecution;
namespace Microsoft
{
    namespace Console
    {
        namespace Render
        {
            class VtRendererTest;
        };
    };
};
using namespace Microsoft::Console;
using namespace Microsoft::Console::Render;
using namespace Microsoft::Console::Types;
using namespace Microsoft::Console::VirtualTerminal::DispatchTypes;
static const std::string CLEAR_SCREEN = "\x1b[2J";
static const std::string CURSOR_HOME = "\x1b[H";
// Sometimes, when we expect the render engine not to write anything, we'll
// add this sentinel to the expected input and manually write it to the
// callback, to make sure nothing else gets written in the meantime.
// We don't use null because that would confuse the VERIFY macros re: string length.
const char* const EMPTY_CALLBACK_SENTINEL = "\xff";
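// A sketch of the intended pattern (illustrative only - the exact call that
// delivers the sentinel to the callback varies from test to test):
//
//   qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
//   // ...exercise the engine; it should emit nothing here...
//   WriteCallback(EMPTY_CALLBACK_SENTINEL, 1); // drain the sentinel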
class Microsoft::Console::Render::VtRendererTest
{
    TEST_CLASS(VtRendererTest);

    TEST_CLASS_SETUP(ClassSetup)
    {
        return true;
    }

    TEST_CLASS_CLEANUP(ClassCleanup)
    {
        return true;
    }

    TEST_METHOD_SETUP(MethodSetup)
    {
        qExpectedInput.clear();
        return true;
    }

    // Defining a TEST_METHOD_CLEANUP seemed to break the x86 test pass. Not sure
    // why - something about the clipboard tests and
    // YOU_CAN_ONLY_DESIGNATE_ONE_CLASS_METHOD_TO_BE_A_TEST_METHOD_SETUP_METHOD.
    // It's probably more correct to leave it out anyway.

    TEST_METHOD(VtSequenceHelperTests);

    TEST_METHOD(Xterm256TestInvalidate);
    TEST_METHOD(Xterm256TestColors);
    TEST_METHOD(Xterm256TestCursor);
    TEST_METHOD(Xterm256TestExtendedAttributes);
    TEST_METHOD(Xterm256TestAttributesAcrossReset);

    TEST_METHOD(XtermTestInvalidate);
    TEST_METHOD(XtermTestColors);
    TEST_METHOD(XtermTestCursor);
    TEST_METHOD(XtermTestAttributesAcrossReset);
    TEST_METHOD(FormattedString);

    TEST_METHOD(TestWrapping);
    TEST_METHOD(TestResize);
    TEST_METHOD(TestCursorVisibility);

    void Test16Colors(VtEngine* engine);

    std::deque<std::string> qExpectedInput;
    bool WriteCallback(const char* const pch, size_t const cch);
    void TestPaint(VtEngine& engine, std::function<void()> pfn);
    Viewport SetUpViewport();
    void VerifyExpectedInputsDrained();
};
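
// Function Description:
// - Creates the 80x32 viewport (inclusive coordinates 0,0 through 79,31) that
//   every test in this class uses as its display area.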
Viewport VtRendererTest::SetUpViewport()
{
    SMALL_RECT view = {};
    view.Top = view.Left = 0;
    view.Bottom = 31;
    view.Right = 79;

    return Viewport::FromInclusive(view);
}
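
// Function Description:
// - Fails the test if any expected strings were queued but never written,
//   logging each string that was left behind.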
void VtRendererTest::VerifyExpectedInputsDrained()
{
    if (!qExpectedInput.empty())
    {
        for (const auto& exp : qExpectedInput)
        {
            Log::Error(NoThrowString().Format(L"EXPECTED INPUT NEVER RECEIVED: %hs", exp.c_str()));
        }
        VERIFY_FAIL(L"there should be no remaining un-drained expected input");
    }
}
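
// Function Description:
// - The write callback handed to the engines under test. Verifies that each
//   string the engine emits exactly matches the next string queued in
//   qExpectedInput, then pops that entry from the queue.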
bool VtRendererTest::WriteCallback(const char* const pch, size_t const cch)
{
    std::string actualString = std::string(pch, cch);
    VERIFY_IS_GREATER_THAN(qExpectedInput.size(),
                           static_cast<size_t>(0),
                           NoThrowString().Format(L"writing=\"%hs\", expecting %u strings", actualString.c_str(), qExpectedInput.size()));

    std::string first = qExpectedInput.front();
    qExpectedInput.pop_front();

    Log::Comment(NoThrowString().Format(L"Expected =\t\"%hs\"", first.c_str()));
    Log::Comment(NoThrowString().Format(L"Actual =\t\"%hs\"", actualString.c_str()));

    VERIFY_ARE_EQUAL(first.length(), cch);
    VERIFY_ARE_EQUAL(first, actualString);

    return true;
}
// Function Description:
// - Small helper to do a series of testing wrapped by StartPaint/EndPaint calls
// Arguments:
// - engine: the engine to operate on
// - pfn: A function pointer to some test code to run.
// Return Value:
// - <none>
void VtRendererTest::TestPaint(VtEngine& engine, std::function<void()> pfn)
{
    VERIFY_SUCCEEDED(engine.StartPaint());
    pfn();
    VERIFY_SUCCEEDED(engine.EndPaint());
}
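
// Used throughout the tests below in the form:
//   TestPaint(*engine, [&]() {
//       VERIFY_IS_TRUE(engine->_invalidMap.all());
//   });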
void VtRendererTest::VtSequenceHelperTests()
{
    wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
    std::unique_ptr<Xterm256Engine> engine = std::make_unique<Xterm256Engine>(std::move(hFile), SetUpViewport());
    auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
    engine->SetTestCallback(pfn);

    qExpectedInput.push_back("\x1b[?12l");
    VERIFY_SUCCEEDED(engine->_StopCursorBlinking());

    qExpectedInput.push_back("\x1b[?12h");
    VERIFY_SUCCEEDED(engine->_StartCursorBlinking());

    qExpectedInput.push_back("\x1b[?25l");
    VERIFY_SUCCEEDED(engine->_HideCursor());

    qExpectedInput.push_back("\x1b[?25h");
    VERIFY_SUCCEEDED(engine->_ShowCursor());

    qExpectedInput.push_back("\x1b[K");
    VERIFY_SUCCEEDED(engine->_EraseLine());

    qExpectedInput.push_back("\x1b[M");
    VERIFY_SUCCEEDED(engine->_DeleteLine(1));

    qExpectedInput.push_back("\x1b[2M");
    VERIFY_SUCCEEDED(engine->_DeleteLine(2));

    qExpectedInput.push_back("\x1b[L");
    VERIFY_SUCCEEDED(engine->_InsertLine(1));

    qExpectedInput.push_back("\x1b[2L");
    VERIFY_SUCCEEDED(engine->_InsertLine(2));

    qExpectedInput.push_back("\x1b[2X");
    VERIFY_SUCCEEDED(engine->_EraseCharacter(2));
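
    // _CursorPosition takes a 0-based {x, y} COORD, while the CUP sequence is
    // 1-based row;column - so {2, 1} (x=2, y=1) becomes row 2, column 3.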
    qExpectedInput.push_back("\x1b[2;3H");
    VERIFY_SUCCEEDED(engine->_CursorPosition({ 2, 1 }));

    qExpectedInput.push_back("\x1b[1;1H");
    VERIFY_SUCCEEDED(engine->_CursorPosition({ 0, 0 }));

    qExpectedInput.push_back("\x1b[H");
    VERIFY_SUCCEEDED(engine->_CursorHome());
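
    // The window-resize sequence takes height before width: CSI 8 ; rows ; cols t,
    // so _ResizeWindow(80, 32) produces "8;32;80t".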
    qExpectedInput.push_back("\x1b[8;32;80t");
    VERIFY_SUCCEEDED(engine->_ResizeWindow(80, 32));

    qExpectedInput.push_back("\x1b[2J");
    VERIFY_SUCCEEDED(engine->_ClearScreen());

    qExpectedInput.push_back("\x1b[10C");
    VERIFY_SUCCEEDED(engine->_CursorForward(10));
}

void VtRendererTest::Xterm256TestInvalidate()
{
    wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
    std::unique_ptr<Xterm256Engine> engine = std::make_unique<Xterm256Engine>(std::move(hFile), SetUpViewport());
    auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
    engine->SetTestCallback(pfn);

    // Verify the first paint emits a clear and go home
    qExpectedInput.push_back("\x1b[2J");
    VERIFY_IS_TRUE(engine->_firstPaint);
    TestPaint(*engine, [&]() {
        VERIFY_IS_FALSE(engine->_firstPaint);
    });
    const Viewport view = SetUpViewport();

    Log::Comment(NoThrowString().Format(
        L"Make sure that invalidating all invalidates the whole viewport."));
    VERIFY_SUCCEEDED(engine->InvalidateAll());
    TestPaint(*engine, [&]() {
        VERIFY_IS_TRUE(engine->_invalidMap.all());
    });

    Log::Comment(NoThrowString().Format(
        L"Make sure that invalidating anything only invalidates that portion"));
    SMALL_RECT invalid = { 1, 1, 2, 2 };
    VERIFY_SUCCEEDED(engine->Invalidate(&invalid));
    TestPaint(*engine, [&]() {
        VERIFY_IS_TRUE(engine->_invalidMap.one());
        VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, *(engine->_invalidMap.begin()));
    });

    Log::Comment(NoThrowString().Format(
        L"Make sure that scrolling only invalidates part of the viewport, and sends the right sequences"));
    COORD scrollDelta = { 0, 1 };
    VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
    TestPaint(*engine, [&]() {
        Log::Comment(NoThrowString().Format(
            L"---- Scrolled one down, only top line is invalid. ----"));

        invalid = view.ToExclusive();
        invalid.Bottom = 1;
        const auto runs = engine->_invalidMap.runs();
        VERIFY_ARE_EQUAL(1u, runs.size());
        VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, runs.front());
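
        // A downward scroll of the buffer contents is expressed as an
        // insert-line at the top of the viewport, which shifts every
        // existing row down by one.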
        qExpectedInput.push_back("\x1b[H"); // Go Home
        qExpectedInput.push_back("\x1b[L"); // insert a line
        VERIFY_SUCCEEDED(engine->ScrollFrame());
    });

    scrollDelta = { 0, 3 };
    VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
    TestPaint(*engine, [&]() {
        Log::Comment(NoThrowString().Format(
            L"---- Scrolled three down, only top 3 lines are invalid. ----"));

        invalid = view.ToExclusive();
        invalid.Bottom = 3;
        // we should have 3 runs and build a rectangle out of them
        const auto runs = engine->_invalidMap.runs();
        VERIFY_ARE_EQUAL(3u, runs.size());
        auto invalidRect = runs.front();
        for (size_t i = 1; i < runs.size(); ++i)
        {
            invalidRect |= runs[i];
        }

        // verify the rect matches the invalid one.
        VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, invalidRect);

        // We would expect a CUP here, but the cursor is already at the home position
        qExpectedInput.push_back("\x1b[3L"); // insert 3 lines
        VERIFY_SUCCEEDED(engine->ScrollFrame());
    });

    scrollDelta = { 0, -1 };
    VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
    TestPaint(*engine, [&]() {
        Log::Comment(NoThrowString().Format(
            L"---- Scrolled one up, only bottom line is invalid. ----"));

        invalid = view.ToExclusive();
        invalid.Top = invalid.Bottom - 1;
const auto runs = engine->_invalidMap.runs();
VERIFY_ARE_EQUAL(1u, runs.size());
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, runs.front());
qExpectedInput.push_back("\x1b[32;1H"); // Bottom of buffer
qExpectedInput.push_back("\n"); // Scroll down once
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
scrollDelta = { 0, -3 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
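// A negative Y delta means the content moved up; translate() back-fills the
// three rows revealed at the bottom as dirty.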
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled three up, only bottom 3 lines are invalid. ----"));
invalid = view.ToExclusive();
invalid.Top = invalid.Bottom - 3;
// We should have 3 runs (one per dirty row); build a single rectangle out of them.
const auto runs = engine->_invalidMap.runs();
VERIFY_ARE_EQUAL(3u, runs.size());
auto invalidRect = runs.front();
for (size_t i = 1; i < runs.size(); ++i)
{
invalidRect |= runs[i];
}
// verify the rect matches the invalid one.
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, invalidRect);
// We would expect a CUP here, but we're already at the bottom from the last call.
qExpectedInput.push_back("\n\n\n"); // Scroll down three times
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
Log::Comment(NoThrowString().Format(
L"Multiple scrolls are coalesced"));
scrollDelta = { 0, 1 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
scrollDelta = { 0, 2 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
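// Both deltas accumulate in the engine, so this frame should behave exactly
// like a single three-line downward scroll.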
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled three down, only top 3 lines are invalid. ----"));
invalid = view.ToExclusive();
invalid.Bottom = 3;
// We should have 3 runs (one per dirty row); build a single rectangle out of them.
const auto runs = engine->_invalidMap.runs();
VERIFY_ARE_EQUAL(3u, runs.size());
auto invalidRect = runs.front();
for (size_t i = 1; i < runs.size(); ++i)
{
invalidRect |= runs[i];
}
// verify the rect matches the invalid one.
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, invalidRect);
qExpectedInput.push_back("\x1b[H"); // Go to home
qExpectedInput.push_back("\x1b[3L"); // insert 3 lines
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
scrollDelta = { 0, 1 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
Log::Comment(engine->_invalidMap.to_string().c_str());
scrollDelta = { 0, -1 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
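// One down (+1) and one up (-1) cancel the net scroll delta, but each
// translate() back-fills the rows it reveals, so the bitmap is not left
// clean - see the diagram in the paint callback below.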
Log::Comment(engine->_invalidMap.to_string().c_str());
qExpectedInput.push_back("\x1b[2J");
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled one down and one up, nothing should change ----"
L" But it still does for now MSFT:14169294"));
const auto runs = engine->_invalidMap.runs();
auto invalidRect = runs.front();
for (size_t i = 1; i < runs.size(); ++i)
{
invalidRect |= runs[i];
}
// only the bottom line should be dirty.
// When we scrolled down, the bitmap looked like this:
// 1111
// 0000
// 0000
// 0000
// And then we scrolled up and the top line fell off and a bottom
// line was filled in like this:
// 0000
// 0000
// 0000
// 1111
const til::rectangle expected{ til::point{ view.Left(), view.BottomInclusive() }, til::size{ view.Width(), 1 } };
VERIFY_ARE_EQUAL(expected, invalidRect);
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
}
void VtRendererTest::Xterm256TestColors()
{
wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
std::unique_ptr<Xterm256Engine> engine = std::make_unique<Xterm256Engine>(std::move(hFile), SetUpViewport());
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
RenderData renderData;
// Verify the first paint emits a clear and go home
qExpectedInput.push_back("\x1b[2J");
VERIFY_IS_TRUE(engine->_firstPaint);
TestPaint(*engine, [&]() {
VERIFY_IS_FALSE(engine->_firstPaint);
});
Viewport view = SetUpViewport();
Log::Comment(NoThrowString().Format(
L"Test changing the text attributes"));
Log::Comment(NoThrowString().Format(
L"Begin by setting some test values - FG,BG = (1,2,3), (4,5,6) to start"
L"These values were picked for ease of formatting raw COLORREF values."));
qExpectedInput.push_back("\x1b[38;2;1;2;3m");
qExpectedInput.push_back("\x1b[48;2;5;6;7m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes({ 0x00030201, 0x00070605 },
&renderData,
false,
false));
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"----Change only the BG----"));
qExpectedInput.push_back("\x1b[48;2;7;8;9m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes({ 0x00030201, 0x00090807 },
&renderData,
false,
false));
Log::Comment(NoThrowString().Format(
L"----Change only the FG----"));
qExpectedInput.push_back("\x1b[38;2;10;11;12m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes({ 0x000c0b0a, 0x00090807 },
&renderData,
false,
false));
});
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"Make sure that color setting persists across EndPaint/StartPaint"));
qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
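// Queue the sentinel as the next expected string: if the engine writes
// anything for this unchanged attribute, the callback will compare it
// against the sentinel and fail.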
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes({ 0x000c0b0a, 0x00090807 },
&renderData,
false,
false));
WriteCallback(EMPTY_CALLBACK_SENTINEL, 1); // This will make sure nothing was written to the callback
});
// Now run the body of the 16-color test as well.
// However, instead of using a closest-match ANSI color, we can reproduce
// the exact RGB or 256-color index value stored in the TextAttribute.
Log::Comment(NoThrowString().Format(
L"Begin by setting the default colors"));
qExpectedInput.push_back("\x1b[m"); // Default colors (SGR reset)
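// A default-constructed TextAttribute represents the default foreground and background.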
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes({},
&renderData,
false,
false));
TestPaint(*engine, [&]() {
TextAttribute textAttributes;
Log::Comment(NoThrowString().Format(
L"----Change only the BG----"));
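// Indexed colors 0-7 map straight onto the ANSI SGR ranges:
// 40-47 for backgrounds and 30-37 for foregrounds.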
textAttributes.SetIndexedBackground(TextColor::DARK_RED);
qExpectedInput.push_back("\x1b[41m"); // Background DARK_RED
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
Log::Comment(NoThrowString().Format(
L"----Change only the FG----"));
textAttributes.SetIndexedForeground(TextColor::DARK_WHITE);
qExpectedInput.push_back("\x1b[37m"); // Foreground DARK_WHITE
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
Log::Comment(NoThrowString().Format(
L"----Change only the BG to an RGB value----"));
textAttributes.SetBackground(RGB(19, 161, 14));
qExpectedInput.push_back("\x1b[48;2;19;161;14m"); // Background RGB(19,161,14)
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
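// Changing only the FG to an RGB value should produce a single truecolor
// sequence (SGR 38;2;<r>;<g>;<b>) with no accompanying reset.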
Log::Comment(NoThrowString().Format(
L"----Change only the FG to an RGB value----"));
textAttributes.SetForeground(RGB(193, 156, 0));
qExpectedInput.push_back("\x1b[38;2;193;156;0m"); // Foreground RGB(193,156,0)
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
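// Defaulting only the BG should emit a lone SGR 49. No full SGR 0 reset is
// expected here, because the foreground is still a non-default color.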
Log::Comment(NoThrowString().Format(
L"----Change only the BG to the 'Default' background----"));
textAttributes.SetDefaultBackground();
qExpectedInput.push_back("\x1b[49m"); // Background default
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
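// 256-color indexes use the SGR 38;5;<n> form for the foreground.
// DARK_WHITE is index 7 in the standard ANSI color order.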
Log::Comment(NoThrowString().Format(
L"----Change only the FG to a 256-color index----"));
textAttributes.SetIndexedForeground256(TextColor::DARK_WHITE);
qExpectedInput.push_back("\x1b[38;5;7m"); // Foreground DARK_WHITE (256-Color Index)
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
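// The background equivalent uses SGR 48;5;<n>; DARK_RED is ANSI index 1.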
Log::Comment(NoThrowString().Format(
L"----Change only the BG to a 256-color index----"));
textAttributes.SetIndexedBackground256(TextColor::DARK_RED);
qExpectedInput.push_back("\x1b[48;5;1m"); // Background DARK_RED (256-Color Index)
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
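// Defaulting only the FG mirrors the BG case above: a lone SGR 39 with no
// reset, since the background is still a non-default color.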
Log::Comment(NoThrowString().Format(
L"----Change only the FG to the 'Default' foreground----"));
textAttributes.SetDefaultForeground();
qExpectedInput.push_back("\x1b[39m"); // Background default
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
Log::Comment(NoThrowString().Format(
L"----Back to defaults----"));
textAttributes = {};
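        // A default-constructed TextAttribute means default colors with no renditions,
        // so the engine should emit a bare SGR reset ("\x1b[m").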
qExpectedInput.push_back("\x1b[m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
});
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"Make sure that color setting persists across EndPaint/StartPaint"));
qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
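        // Queue the sentinel first; when it's fed back below, any unexpected engine
        // output produced in between would fail the expected-input check.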
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes({},
&renderData,
false,
false));
WriteCallback(EMPTY_CALLBACK_SENTINEL, 1); // This will make sure nothing was written to the callback
});
}
void VtRendererTest::Xterm256TestCursor()
{
wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
std::unique_ptr<Xterm256Engine> engine = std::make_unique<Xterm256Engine>(std::move(hFile), SetUpViewport());
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
// Verify the first paint emits a clear and go home
qExpectedInput.push_back("\x1b[2J");
VERIFY_IS_TRUE(engine->_firstPaint);
TestPaint(*engine, [&]() {
VERIFY_IS_FALSE(engine->_firstPaint);
});
Viewport view = SetUpViewport();
Log::Comment(NoThrowString().Format(
L"Test moving the cursor around. Every sequence should have both params to CUP explicitly."));
TestPaint(*engine, [&]() {
qExpectedInput.push_back("\x1b[2;2H");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 1, 1 }));
Log::Comment(NoThrowString().Format(
L"----Only move Y coord----"));
qExpectedInput.push_back("\x1b[31;2H");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 1, 30 }));
Log::Comment(NoThrowString().Format(
L"----Only move X coord----"));
qExpectedInput.push_back("\x1b[29C");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 30, 30 }));
Log::Comment(NoThrowString().Format(
L"----Sending the same move sends nothing----"));
qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
VERIFY_SUCCEEDED(engine->_MoveCursor({ 30, 30 }));
WriteCallback(EMPTY_CALLBACK_SENTINEL, 1);
Log::Comment(NoThrowString().Format(
L"----moving home sends a simple sequence----"));
qExpectedInput.push_back("\x1b[H");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 0 }));
Log::Comment(NoThrowString().Format(
L"----move into the line to test some other sequences----"));
qExpectedInput.push_back("\x1b[7C");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 7, 0 }));
Log::Comment(NoThrowString().Format(
L"----move down one line (x stays the same)----"));
qExpectedInput.push_back("\n");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 7, 1 }));
Log::Comment(NoThrowString().Format(
L"----move to the start of the next line----"));
qExpectedInput.push_back("\r\n");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 2 }));
Log::Comment(NoThrowString().Format(
L"----move into the line to test some other sequences----"));
qExpectedInput.push_back("\x1b[2;8H");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 7, 1 }));
Log::Comment(NoThrowString().Format(
L"----move to the start of this line (y stays the same)----"));
qExpectedInput.push_back("\r");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 1 }));
});
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"Sending the same move across paint calls sends nothing."
L"The cursor's last \"real\" position was 0,0"));
qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 1 }));
WriteCallback(EMPTY_CALLBACK_SENTINEL, 1);
Log::Comment(NoThrowString().Format(
L"Paint some text at 0,0, then try moving the cursor to where it currently is."));
qExpectedInput.push_back("\x1b[1C");
qExpectedInput.push_back("asdfghjkl");
const wchar_t* const line = L"asdfghjkl";
const unsigned char rgWidths[] = { 1, 1, 1, 1, 1, 1, 1, 1, 1 };
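        // Every glyph here occupies a single cell; fullwidth glyphs would use a width of 2.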
std::vector<Cluster> clusters;
for (size_t i = 0; i < wcslen(line); i++)
{
clusters.emplace_back(std::wstring_view{ &line[i], 1 }, static_cast<size_t>(rgWidths[i]));
}
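        // The trailing parameter is the lineWrapped flag; false here, as this row doesn't wrap.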
VERIFY_SUCCEEDED(engine->PaintBufferLine({ clusters.data(), clusters.size() }, { 1, 1 }, false, false));
qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
VERIFY_SUCCEEDED(engine->_MoveCursor({ 10, 1 }));
WriteCallback(EMPTY_CALLBACK_SENTINEL, 1);
});
// Note that only PaintBufferLine updates the "Real" cursor position, which
// the cursor is moved back to at the end of each paint
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"Sending the same move across paint calls sends nothing."));
qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
VERIFY_SUCCEEDED(engine->_MoveCursor({ 10, 1 }));
WriteCallback(EMPTY_CALLBACK_SENTINEL, 1);
});
}
void VtRendererTest::Xterm256TestExtendedAttributes()
{
// Run this test for each and every possible combination of states.
BEGIN_TEST_METHOD_PROPERTIES()
Add support for the "faint" graphic rendition attribute (#6873) ## Summary of the Pull Request This PR adds support for the `SGR 2` escape sequence, which enables the ANSI _faint_ graphic rendition attribute. When a character is output with this attribute set, it uses a dimmer version of the active foreground color. ## PR Checklist * [x] Closes #6703 * [x] CLA signed. * [x] Tests added/passed * [ ] Documentation updated. * [ ] Schema updated. * [x] I've discussed this with core contributors already. Issue number where discussion took place: #6703 ## Detailed Description of the Pull Request / Additional comments There was already an `ExtendedAttributes::Faint` flag in the `TextAttribute` class, but I needed to add `SetFaint` and `IsFaint` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 2` sequence. I also had to update the existing `SGR 22` handler to reset _Faint_ in addition to _Bold_, since they share the same reset sequence. For that reason, I thought it a good idea to change the name of the `SGR 22` enum to `NotBoldOrFaint`. For the purpose of rendering, I've updated the `TextAttribute::CalculateRgbColors` method to return a dimmer version of the foreground color when the _Faint_ attribute is set. This is simply achieved by dividing each color component by two, which produces a reasonable effect without being too complicated. Note that the _Faint_ effect is applied before _Reverse Video_, so if the output it reversed, it's the background that will be faint. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, _Bold_ and _Faint_ share the same reset sequence, so to forward that state over conpty we have to go through a slightly more complicated process than with other attributes. We first check whether either attribute needs to be turned off to send the reset sequence, and then check if the individual attributes need to be turned on again. ## Validation I've extended the existing SGR unit tests to cover the new attribute in the `AdapterTest`, the `ScreenBufferTests`, and the `VtRendererTest`, and added a test to confirm the color calculations when _Faint_ is set in the `TextAttributeTests`. I've also done a bunch of manual testing with all the different VT color types and confirmed that our output is comparable to most other terminals.
2020-07-13 19:44:09 +02:00
TEST_METHOD_PROPERTY(L"Data:faint", L"{false, true}")
Add support for the "doubly underlined" graphic rendition attribute (#7223) This PR adds support for the ANSI _doubly underlined_ graphic rendition attribute, which is enabled by the `SGR 21` escape sequence. There was already an `ExtendedAttributes::DoublyUnderlined` flag in the `TextAttribute` class, but I needed to add `SetDoublyUnderlined` and `IsDoublyUnderlined` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 21` sequence. I also had to update the existing `SGR 24` handler to reset _DoublyUnderlined_ in addition to _Underlined_, since they share the same reset sequence. For the rendering, I've added a new grid line type, which essentially just draws an additional line with the same thickness as the regular underline, but slightly below it - I found a gap of around 0.05 "em" between the lines looked best. If there isn't enough space in the cell for that gap, the second line will be clamped to overlap the first, so you then just get a thicker line. If there isn't even enough space below for a thicker line, we move the offset _above_ the first line, but just enough to make it thicker. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, the two underline attributes share the same reset sequence, so to forward that state over conpty we require a slightly more complicated process than with most other attributes (similar to _Bold_ and _Faint_). We first check whether either underline attribute needs to be turned off to send the reset sequence, and then check individually if each of them needs to be turned back on again. ## Validation Steps Performed For testing, I've extended the existing attribute tests in `AdapterTest`, `VTRendererTest`, and `ScreenBufferTests`, to make sure we're covering both the _Underlined_ and _DoublyUnderlined_ attributes. I've also manually tested the `SGR 21` sequence in conhost and Windows Terminal, with a variety of fonts and font sizes, to make sure the rendering was reasonably distinguishable from a single underline. Closes #2916
2020-08-10 19:06:16 +02:00
TEST_METHOD_PROPERTY(L"Data:underlined", L"{false, true}")
TEST_METHOD_PROPERTY(L"Data:doublyUnderlined", L"{false, true}")
TEST_METHOD_PROPERTY(L"Data:italics", L"{false, true}")
TEST_METHOD_PROPERTY(L"Data:blink", L"{false, true}")
TEST_METHOD_PROPERTY(L"Data:invisible", L"{false, true}")
TEST_METHOD_PROPERTY(L"Data:crossedOut", L"{false, true}")
END_TEST_METHOD_PROPERTIES()
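    // TAEF runs this method once for every combination of the seven boolean
    // properties above (2^7 = 128 variations).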
Add support for the "doubly underlined" graphic rendition attribute (#7223) This PR adds support for the ANSI _doubly underlined_ graphic rendition attribute, which is enabled by the `SGR 21` escape sequence. There was already an `ExtendedAttributes::DoublyUnderlined` flag in the `TextAttribute` class, but I needed to add `SetDoublyUnderlined` and `IsDoublyUnderlined` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 21` sequence. I also had to update the existing `SGR 24` handler to reset _DoublyUnderlined_ in addition to _Underlined_, since they share the same reset sequence. For the rendering, I've added a new grid line type, which essentially just draws an additional line with the same thickness as the regular underline, but slightly below it - I found a gap of around 0.05 "em" between the lines looked best. If there isn't enough space in the cell for that gap, the second line will be clamped to overlap the first, so you then just get a thicker line. If there isn't even enough space below for a thicker line, we move the offset _above_ the first line, but just enough to make it thicker. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, the two underline attributes share the same reset sequence, so to forward that state over conpty we require a slightly more complicated process than with most other attributes (similar to _Bold_ and _Faint_). We first check whether either underline attribute needs to be turned off to send the reset sequence, and then check individually if each of them needs to be turned back on again. ## Validation Steps Performed For testing, I've extended the existing attribute tests in `AdapterTest`, `VTRendererTest`, and `ScreenBufferTests`, to make sure we're covering both the _Underlined_ and _DoublyUnderlined_ attributes. I've also manually tested the `SGR 21` sequence in conhost and Windows Terminal, with a variety of fonts and font sizes, to make sure the rendering was reasonably distinguishable from a single underline. Closes #2916
2020-08-10 19:06:16 +02:00
bool faint, underlined, doublyUnderlined, italics, blink, invisible, crossedOut;
Add support for the "faint" graphic rendition attribute (#6873) ## Summary of the Pull Request This PR adds support for the `SGR 2` escape sequence, which enables the ANSI _faint_ graphic rendition attribute. When a character is output with this attribute set, it uses a dimmer version of the active foreground color. ## PR Checklist * [x] Closes #6703 * [x] CLA signed. * [x] Tests added/passed * [ ] Documentation updated. * [ ] Schema updated. * [x] I've discussed this with core contributors already. Issue number where discussion took place: #6703 ## Detailed Description of the Pull Request / Additional comments There was already an `ExtendedAttributes::Faint` flag in the `TextAttribute` class, but I needed to add `SetFaint` and `IsFaint` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 2` sequence. I also had to update the existing `SGR 22` handler to reset _Faint_ in addition to _Bold_, since they share the same reset sequence. For that reason, I thought it a good idea to change the name of the `SGR 22` enum to `NotBoldOrFaint`. For the purpose of rendering, I've updated the `TextAttribute::CalculateRgbColors` method to return a dimmer version of the foreground color when the _Faint_ attribute is set. This is simply achieved by dividing each color component by two, which produces a reasonable effect without being too complicated. Note that the _Faint_ effect is applied before _Reverse Video_, so if the output it reversed, it's the background that will be faint. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, _Bold_ and _Faint_ share the same reset sequence, so to forward that state over conpty we have to go through a slightly more complicated process than with other attributes. We first check whether either attribute needs to be turned off to send the reset sequence, and then check if the individual attributes need to be turned on again. ## Validation I've extended the existing SGR unit tests to cover the new attribute in the `AdapterTest`, the `ScreenBufferTests`, and the `VtRendererTest`, and added a test to confirm the color calculations when _Faint_ is set in the `TextAttributeTests`. I've also done a bunch of manual testing with all the different VT color types and confirmed that our output is comparable to most other terminals.
2020-07-13 19:44:09 +02:00
VERIFY_SUCCEEDED(TestData::TryGetValue(L"faint", faint));
Add support for the "doubly underlined" graphic rendition attribute (#7223) This PR adds support for the ANSI _doubly underlined_ graphic rendition attribute, which is enabled by the `SGR 21` escape sequence. There was already an `ExtendedAttributes::DoublyUnderlined` flag in the `TextAttribute` class, but I needed to add `SetDoublyUnderlined` and `IsDoublyUnderlined` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 21` sequence. I also had to update the existing `SGR 24` handler to reset _DoublyUnderlined_ in addition to _Underlined_, since they share the same reset sequence. For the rendering, I've added a new grid line type, which essentially just draws an additional line with the same thickness as the regular underline, but slightly below it - I found a gap of around 0.05 "em" between the lines looked best. If there isn't enough space in the cell for that gap, the second line will be clamped to overlap the first, so you then just get a thicker line. If there isn't even enough space below for a thicker line, we move the offset _above_ the first line, but just enough to make it thicker. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, the two underline attributes share the same reset sequence, so to forward that state over conpty we require a slightly more complicated process than with most other attributes (similar to _Bold_ and _Faint_). We first check whether either underline attribute needs to be turned off to send the reset sequence, and then check individually if each of them needs to be turned back on again. ## Validation Steps Performed For testing, I've extended the existing attribute tests in `AdapterTest`, `VTRendererTest`, and `ScreenBufferTests`, to make sure we're covering both the _Underlined_ and _DoublyUnderlined_ attributes. I've also manually tested the `SGR 21` sequence in conhost and Windows Terminal, with a variety of fonts and font sizes, to make sure the rendering was reasonably distinguishable from a single underline. Closes #2916
2020-08-10 19:06:16 +02:00
VERIFY_SUCCEEDED(TestData::TryGetValue(L"underlined", underlined));
VERIFY_SUCCEEDED(TestData::TryGetValue(L"doublyUnderlined", doublyUnderlined));
VERIFY_SUCCEEDED(TestData::TryGetValue(L"italics", italics));
VERIFY_SUCCEEDED(TestData::TryGetValue(L"blink", blink));
VERIFY_SUCCEEDED(TestData::TryGetValue(L"invisible", invisible));
VERIFY_SUCCEEDED(TestData::TryGetValue(L"crossedOut", crossedOut));
TextAttribute desiredAttrs;
std::vector<std::string> onSequences, offSequences;
    // Collect the VT sequences for turning each selected attribute on and off, given the method properties
Add support for the "faint" graphic rendition attribute (#6873) ## Summary of the Pull Request This PR adds support for the `SGR 2` escape sequence, which enables the ANSI _faint_ graphic rendition attribute. When a character is output with this attribute set, it uses a dimmer version of the active foreground color. ## PR Checklist * [x] Closes #6703 * [x] CLA signed. * [x] Tests added/passed * [ ] Documentation updated. * [ ] Schema updated. * [x] I've discussed this with core contributors already. Issue number where discussion took place: #6703 ## Detailed Description of the Pull Request / Additional comments There was already an `ExtendedAttributes::Faint` flag in the `TextAttribute` class, but I needed to add `SetFaint` and `IsFaint` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 2` sequence. I also had to update the existing `SGR 22` handler to reset _Faint_ in addition to _Bold_, since they share the same reset sequence. For that reason, I thought it a good idea to change the name of the `SGR 22` enum to `NotBoldOrFaint`. For the purpose of rendering, I've updated the `TextAttribute::CalculateRgbColors` method to return a dimmer version of the foreground color when the _Faint_ attribute is set. This is simply achieved by dividing each color component by two, which produces a reasonable effect without being too complicated. Note that the _Faint_ effect is applied before _Reverse Video_, so if the output it reversed, it's the background that will be faint. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, _Bold_ and _Faint_ share the same reset sequence, so to forward that state over conpty we have to go through a slightly more complicated process than with other attributes. We first check whether either attribute needs to be turned off to send the reset sequence, and then check if the individual attributes need to be turned on again. ## Validation I've extended the existing SGR unit tests to cover the new attribute in the `AdapterTest`, the `ScreenBufferTests`, and the `VtRendererTest`, and added a test to confirm the color calculations when _Faint_ is set in the `TextAttributeTests`. I've also done a bunch of manual testing with all the different VT color types and confirmed that our output is comparable to most other terminals.
2020-07-13 19:44:09 +02:00
if (faint)
{
desiredAttrs.SetFaint(true);
onSequences.push_back("\x1b[2m");
offSequences.push_back("\x1b[22m");
}
Add support for the "doubly underlined" graphic rendition attribute (#7223) This PR adds support for the ANSI _doubly underlined_ graphic rendition attribute, which is enabled by the `SGR 21` escape sequence. There was already an `ExtendedAttributes::DoublyUnderlined` flag in the `TextAttribute` class, but I needed to add `SetDoublyUnderlined` and `IsDoublyUnderlined` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 21` sequence. I also had to update the existing `SGR 24` handler to reset _DoublyUnderlined_ in addition to _Underlined_, since they share the same reset sequence. For the rendering, I've added a new grid line type, which essentially just draws an additional line with the same thickness as the regular underline, but slightly below it - I found a gap of around 0.05 "em" between the lines looked best. If there isn't enough space in the cell for that gap, the second line will be clamped to overlap the first, so you then just get a thicker line. If there isn't even enough space below for a thicker line, we move the offset _above_ the first line, but just enough to make it thicker. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, the two underline attributes share the same reset sequence, so to forward that state over conpty we require a slightly more complicated process than with most other attributes (similar to _Bold_ and _Faint_). We first check whether either underline attribute needs to be turned off to send the reset sequence, and then check individually if each of them needs to be turned back on again. ## Validation Steps Performed For testing, I've extended the existing attribute tests in `AdapterTest`, `VTRendererTest`, and `ScreenBufferTests`, to make sure we're covering both the _Underlined_ and _DoublyUnderlined_ attributes. I've also manually tested the `SGR 21` sequence in conhost and Windows Terminal, with a variety of fonts and font sizes, to make sure the rendering was reasonably distinguishable from a single underline. Closes #2916
2020-08-10 19:06:16 +02:00
if (underlined)
{
desiredAttrs.SetUnderlined(true);
onSequences.push_back("\x1b[4m");
offSequences.push_back("\x1b[24m");
}
if (doublyUnderlined)
{
desiredAttrs.SetDoublyUnderlined(true);
onSequences.push_back("\x1b[21m");
// The two underlines share the same off sequence, so we
// only add it here if that hasn't already been done.
if (!underlined)
{
offSequences.push_back("\x1b[24m");
}
}
if (italics)
{
desiredAttrs.SetItalic(true);
onSequences.push_back("\x1b[3m");
offSequences.push_back("\x1b[23m");
}
if (blink)
{
desiredAttrs.SetBlinking(true);
onSequences.push_back("\x1b[5m");
offSequences.push_back("\x1b[25m");
}
if (invisible)
{
desiredAttrs.SetInvisible(true);
onSequences.push_back("\x1b[8m");
offSequences.push_back("\x1b[28m");
}
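// SGR 9/29 toggles the crossed-out (strikethrough) attribute.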
if (crossedOut)
{
desiredAttrs.SetCrossedOut(true);
onSequences.push_back("\x1b[9m");
offSequences.push_back("\x1b[29m");
}
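// Create the engine with an invalid file handle; the test callback
// registered below captures all VT output, so nothing is written to a
// real pipe.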
wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
std::unique_ptr<Xterm256Engine> engine = std::make_unique<Xterm256Engine>(std::move(hFile), SetUpViewport());
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
// Verify the first paint emits a clear and go home
qExpectedInput.push_back("\x1b[2J");
VERIFY_IS_TRUE(engine->_firstPaint);
TestPaint(*engine, [&]() {
VERIFY_IS_FALSE(engine->_firstPaint);
});
Viewport view = SetUpViewport();
Log::Comment(NoThrowString().Format(
L"Test changing the text attributes"));
Log::Comment(NoThrowString().Format(
L"----Turn the extended attributes on----"));
TestPaint(*engine, [&]() {
// Merge the "on" sequences into expected input.
std::copy(onSequences.cbegin(), onSequences.cend(), std::back_inserter(qExpectedInput));
VERIFY_SUCCEEDED(engine->_UpdateExtendedAttrs(desiredAttrs));
});
Log::Comment(NoThrowString().Format(
L"----Turn the extended attributes off----"));
TestPaint(*engine, [&]() {
std::copy(offSequences.cbegin(), offSequences.cend(), std::back_inserter(qExpectedInput));
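// Passing a default-constructed TextAttribute should turn all of the
// extended attributes back off, emitting the "off" sequences above.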
VERIFY_SUCCEEDED(engine->_UpdateExtendedAttrs({}));
});
Log::Comment(NoThrowString().Format(
L"----Turn the extended attributes back on----"));
TestPaint(*engine, [&]() {
std::copy(onSequences.cbegin(), onSequences.cend(), std::back_inserter(qExpectedInput));
VERIFY_SUCCEEDED(engine->_UpdateExtendedAttrs(desiredAttrs));
});
VerifyExpectedInputsDrained();
}
void VtRendererTest::Xterm256TestAttributesAcrossReset()
{
BEGIN_TEST_METHOD_PROPERTIES()
Add support for the "doubly underlined" graphic rendition attribute (#7223) This PR adds support for the ANSI _doubly underlined_ graphic rendition attribute, which is enabled by the `SGR 21` escape sequence. There was already an `ExtendedAttributes::DoublyUnderlined` flag in the `TextAttribute` class, but I needed to add `SetDoublyUnderlined` and `IsDoublyUnderlined` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 21` sequence. I also had to update the existing `SGR 24` handler to reset _DoublyUnderlined_ in addition to _Underlined_, since they share the same reset sequence. For the rendering, I've added a new grid line type, which essentially just draws an additional line with the same thickness as the regular underline, but slightly below it - I found a gap of around 0.05 "em" between the lines looked best. If there isn't enough space in the cell for that gap, the second line will be clamped to overlap the first, so you then just get a thicker line. If there isn't even enough space below for a thicker line, we move the offset _above_ the first line, but just enough to make it thicker. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, the two underline attributes share the same reset sequence, so to forward that state over conpty we require a slightly more complicated process than with most other attributes (similar to _Bold_ and _Faint_). We first check whether either underline attribute needs to be turned off to send the reset sequence, and then check individually if each of them needs to be turned back on again. ## Validation Steps Performed For testing, I've extended the existing attribute tests in `AdapterTest`, `VTRendererTest`, and `ScreenBufferTests`, to make sure we're covering both the _Underlined_ and _DoublyUnderlined_ attributes. I've also manually tested the `SGR 21` sequence in conhost and Windows Terminal, with a variety of fonts and font sizes, to make sure the rendering was reasonably distinguishable from a single underline. Closes #2916
2020-08-10 19:06:16 +02:00
TEST_METHOD_PROPERTY(L"Data:renditionAttribute", L"{1, 2, 3, 4, 5, 7, 8, 9, 21, 53}")
END_TEST_METHOD_PROPERTIES()
int renditionAttribute;
VERIFY_SUCCEEDED(TestData::TryGetValue(L"renditionAttribute", renditionAttribute));
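// Build the SGR sequence for the rendition attribute under test,
// e.g. "\x1b[1m" when the attribute is 1 (bold).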
std::stringstream renditionSequence;
renditionSequence << "\x1b[" << renditionAttribute << "m";
wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
std::unique_ptr<Xterm256Engine> engine = std::make_unique<Xterm256Engine>(std::move(hFile), SetUpViewport());
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
RenderData renderData;
Log::Comment(L"Make sure rendition attributes are retained when colors are reset");
Log::Comment(L"----Start With All Attributes Reset----");
TextAttribute textAttributes = {};
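// With every attribute reset, the engine is expected to emit a bare SGR reset.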
qExpectedInput.push_back("\x1b[m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
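// Map the parameterized rendition attribute onto the matching TextAttribute flag.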
switch (renditionAttribute)
{
case GraphicsOptions::BoldBright:
Log::Comment(L"----Set Bold Attribute----");
textAttributes.SetBold(true);
break;
Add support for the "faint" graphic rendition attribute (#6873) ## Summary of the Pull Request This PR adds support for the `SGR 2` escape sequence, which enables the ANSI _faint_ graphic rendition attribute. When a character is output with this attribute set, it uses a dimmer version of the active foreground color. ## PR Checklist * [x] Closes #6703 * [x] CLA signed. * [x] Tests added/passed * [ ] Documentation updated. * [ ] Schema updated. * [x] I've discussed this with core contributors already. Issue number where discussion took place: #6703 ## Detailed Description of the Pull Request / Additional comments There was already an `ExtendedAttributes::Faint` flag in the `TextAttribute` class, but I needed to add `SetFaint` and `IsFaint` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 2` sequence. I also had to update the existing `SGR 22` handler to reset _Faint_ in addition to _Bold_, since they share the same reset sequence. For that reason, I thought it a good idea to change the name of the `SGR 22` enum to `NotBoldOrFaint`. For the purpose of rendering, I've updated the `TextAttribute::CalculateRgbColors` method to return a dimmer version of the foreground color when the _Faint_ attribute is set. This is simply achieved by dividing each color component by two, which produces a reasonable effect without being too complicated. Note that the _Faint_ effect is applied before _Reverse Video_, so if the output it reversed, it's the background that will be faint. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, _Bold_ and _Faint_ share the same reset sequence, so to forward that state over conpty we have to go through a slightly more complicated process than with other attributes. We first check whether either attribute needs to be turned off to send the reset sequence, and then check if the individual attributes need to be turned on again. ## Validation I've extended the existing SGR unit tests to cover the new attribute in the `AdapterTest`, the `ScreenBufferTests`, and the `VtRendererTest`, and added a test to confirm the color calculations when _Faint_ is set in the `TextAttributeTests`. I've also done a bunch of manual testing with all the different VT color types and confirmed that our output is comparable to most other terminals.
2020-07-13 19:44:09 +02:00
case GraphicsOptions::RGBColorOrFaint:
Log::Comment(L"----Set Faint Attribute----");
textAttributes.SetFaint(true);
break;
case GraphicsOptions::Italics:
Log::Comment(L"----Set Italics Attribute----");
textAttributes.SetItalic(true);
break;
case GraphicsOptions::Underline:
Log::Comment(L"----Set Underline Attribute----");
textAttributes.SetUnderlined(true);
break;
Add support for the "doubly underlined" graphic rendition attribute (#7223) This PR adds support for the ANSI _doubly underlined_ graphic rendition attribute, which is enabled by the `SGR 21` escape sequence. There was already an `ExtendedAttributes::DoublyUnderlined` flag in the `TextAttribute` class, but I needed to add `SetDoublyUnderlined` and `IsDoublyUnderlined` methods to access that flag, and update the `SetGraphicsRendition` methods of the two dispatchers to set the attribute on receipt of the `SGR 21` sequence. I also had to update the existing `SGR 24` handler to reset _DoublyUnderlined_ in addition to _Underlined_, since they share the same reset sequence. For the rendering, I've added a new grid line type, which essentially just draws an additional line with the same thickness as the regular underline, but slightly below it - I found a gap of around 0.05 "em" between the lines looked best. If there isn't enough space in the cell for that gap, the second line will be clamped to overlap the first, so you then just get a thicker line. If there isn't even enough space below for a thicker line, we move the offset _above_ the first line, but just enough to make it thicker. The only other complication was the update of the `Xterm256Engine` in the VT renderer. As mentioned above, the two underline attributes share the same reset sequence, so to forward that state over conpty we require a slightly more complicated process than with most other attributes (similar to _Bold_ and _Faint_). We first check whether either underline attribute needs to be turned off to send the reset sequence, and then check individually if each of them needs to be turned back on again. ## Validation Steps Performed For testing, I've extended the existing attribute tests in `AdapterTest`, `VTRendererTest`, and `ScreenBufferTests`, to make sure we're covering both the _Underlined_ and _DoublyUnderlined_ attributes. I've also manually tested the `SGR 21` sequence in conhost and Windows Terminal, with a variety of fonts and font sizes, to make sure the rendering was reasonably distinguishable from a single underline. Closes #2916
2020-08-10 19:06:16 +02:00
case GraphicsOptions::DoublyUnderlined:
Log::Comment(L"----Set Doubly Underlined Attribute----");
textAttributes.SetDoublyUnderlined(true);
break;
Add support for the "overline" graphic rendition attribute (#6754) ## Summary of the Pull Request This PR adds support for the `SGR 53` and `SGR 55` escapes sequences, which enable and disable the ANSI _overline_ graphic rendition attribute, the equivalent of the console character attribute `COMMON_LVB_GRID_HORIZONTAL`. When a character is output with this attribute set, a horizontal line is rendered at the top of the character cell. ## PR Checklist * [x] Closes #6000 * [x] CLA signed. * [x] Tests added/passed * [ ] Documentation updated. * [ ] Schema updated. * [x] I've discussed this with core contributors already. ## Detailed Description of the Pull Request / Additional comments To start with, I added `SetOverline` and `IsOverlined` methods to the `TextAttribute` class, to set and get the legacy `COMMON_LVB_GRID_HORIZONTAL` attribute. Technically there was already an `IsTopHorizontalDisplayed` method, but I thought it more readable to add a separate `IsOverlined` as an alias for that. Then it was just a matter of adding calls to set and reset the attribute in response to the `SGR 53` and `SGR 55` sequences in the `SetGraphicsRendition` methods of the two dispatchers. The actual rendering was already taken care of by the `PaintBufferGridLines` method in the rendering engines. The only other change required was to update the `_UpdateExtendedAttrs` method in the `Xterm256Engine` of the VT renderer, to ensure the attribute state would be forwarded to the Windows Terminal over conpty. ## Validation Steps Performed I've extended the existing SGR unit tests to cover the new attribute in the `AdapterTest`, the `OutputEngineTest`, and the `VtRendererTest`. I've also manually tested the `SGR 53` and `SGR 55` sequences to confirm that they do actually render (or remove) an overline on the characters being output.
2020-07-06 16:11:17 +02:00
case GraphicsOptions::Overline:
Log::Comment(L"----Set Overline Attribute----");
textAttributes.SetOverlined(true);
Add support for the "overline" graphic rendition attribute (#6754) ## Summary of the Pull Request This PR adds support for the `SGR 53` and `SGR 55` escapes sequences, which enable and disable the ANSI _overline_ graphic rendition attribute, the equivalent of the console character attribute `COMMON_LVB_GRID_HORIZONTAL`. When a character is output with this attribute set, a horizontal line is rendered at the top of the character cell. ## PR Checklist * [x] Closes #6000 * [x] CLA signed. * [x] Tests added/passed * [ ] Documentation updated. * [ ] Schema updated. * [x] I've discussed this with core contributors already. ## Detailed Description of the Pull Request / Additional comments To start with, I added `SetOverline` and `IsOverlined` methods to the `TextAttribute` class, to set and get the legacy `COMMON_LVB_GRID_HORIZONTAL` attribute. Technically there was already an `IsTopHorizontalDisplayed` method, but I thought it more readable to add a separate `IsOverlined` as an alias for that. Then it was just a matter of adding calls to set and reset the attribute in response to the `SGR 53` and `SGR 55` sequences in the `SetGraphicsRendition` methods of the two dispatchers. The actual rendering was already taken care of by the `PaintBufferGridLines` method in the rendering engines. The only other change required was to update the `_UpdateExtendedAttrs` method in the `Xterm256Engine` of the VT renderer, to ensure the attribute state would be forwarded to the Windows Terminal over conpty. ## Validation Steps Performed I've extended the existing SGR unit tests to cover the new attribute in the `AdapterTest`, the `OutputEngineTest`, and the `VtRendererTest`. I've also manually tested the `SGR 53` and `SGR 55` sequences to confirm that they do actually render (or remove) an overline on the characters being output.
2020-07-06 16:11:17 +02:00
break;
case GraphicsOptions::BlinkOrXterm256Index:
Log::Comment(L"----Set Blink Attribute----");
textAttributes.SetBlinking(true);
break;
case GraphicsOptions::Negative:
Log::Comment(L"----Set Negative Attribute----");
textAttributes.SetReverseVideo(true);
break;
case GraphicsOptions::Invisible:
Log::Comment(L"----Set Invisible Attribute----");
textAttributes.SetInvisible(true);
break;
case GraphicsOptions::CrossedOut:
Log::Comment(L"----Set Crossed Out Attribute----");
textAttributes.SetCrossedOut(true);
break;
}
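// Whichever attribute was set above, the engine is expected to emit the
// matching SGR rendition sequence on the next brush update.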
qExpectedInput.push_back(renditionSequence.str());
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
Log::Comment(L"----Set Green Foreground----");
textAttributes.SetIndexedForeground(TextColor::DARK_GREEN);
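// DARK_GREEN is ANSI color index 2, so the engine should emit SGR 32 (30 + 2).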
qExpectedInput.push_back("\x1b[32m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
Log::Comment(L"----Reset Default Foreground and Retain Rendition----");
textAttributes.SetDefaultForeground();
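// Returning the foreground to default forces an SGR 0 reset, which also
// clears the rendition attributes, so the engine must re-emit the
// rendition sequence immediately after the reset.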
qExpectedInput.push_back("\x1b[m");
qExpectedInput.push_back(renditionSequence.str());
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
Log::Comment(L"----Set Green Background----");
textAttributes.SetIndexedBackground(TextColor::DARK_GREEN);
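// Background index 2 is emitted as SGR 42 (40 + 2).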
qExpectedInput.push_back("\x1b[42m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
Log::Comment(L"----Reset Default Background and Retain Rendition----");
textAttributes.SetDefaultBackground();
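// As with the foreground, resetting the background to default triggers an
// SGR 0 reset, so the rendition sequence is expected to be replayed.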
qExpectedInput.push_back("\x1b[m");
qExpectedInput.push_back(renditionSequence.str());
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
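// Every sequence queued in qExpectedInput should have been consumed by now.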
VerifyExpectedInputsDrained();
}
void VtRendererTest::XtermTestInvalidate()
{
wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
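// The file handle is deliberately invalid: nothing is written to a real pipe.
// Instead, the engine's output is routed through the test callback registered
// below and checked against the sequences queued in qExpectedInput.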
std::unique_ptr<XtermEngine> engine = std::make_unique<XtermEngine>(std::move(hFile), SetUpViewport(), false);
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
// Verify the first paint emits a clear and go home
qExpectedInput.push_back("\x1b[2J");
VERIFY_IS_TRUE(engine->_firstPaint);
TestPaint(*engine, [&]() {
VERIFY_IS_FALSE(engine->_firstPaint);
});
Viewport view = SetUpViewport();
Log::Comment(NoThrowString().Format(
L"Make sure that invalidating all invalidates the whole viewport."));
VERIFY_SUCCEEDED(engine->InvalidateAll());
TestPaint(*engine, [&]() {
VERIFY_IS_TRUE(engine->_invalidMap.all());
});
Log::Comment(NoThrowString().Format(
L"Make sure that invalidating anything only invalidates that portion"));
SMALL_RECT invalid = { 1, 1, 2, 2 };
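// Note that the rectangle passed to Invalidate is in exclusive coordinates,
// hence the FromExclusive conversion when comparing against the bitmap below.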
VERIFY_SUCCEEDED(engine->Invalidate(&invalid));
TestPaint(*engine, [&]() {
VERIFY_IS_TRUE(engine->_invalidMap.one());
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, *(engine->_invalidMap.begin()));
});
Log::Comment(NoThrowString().Format(
L"Make sure that scrolling only invalidates part of the viewport, and sends the right sequences"));
COORD scrollDelta = { 0, 1 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled one down, only top line is invalid. ----"));
invalid = view.ToExclusive();
invalid.Bottom = 1;
const auto runs = engine->_invalidMap.runs();
VERIFY_ARE_EQUAL(1u, runs.size());
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, runs.front());
qExpectedInput.push_back("\x1b[H"); // Go Home
qExpectedInput.push_back("\x1b[L"); // insert a line
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
scrollDelta = { 0, 3 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled three down, only top 3 lines are invalid. ----"));
invalid = view.ToExclusive();
invalid.Bottom = 3;
// we should have 3 runs and build a rectangle out of them
const auto runs = engine->_invalidMap.runs();
VERIFY_ARE_EQUAL(3u, runs.size());
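// til::rectangle's |= operator unions two rectangles into their bounding
// rectangle, so OR-ing the three single-line runs reassembles the full
// invalid region for comparison.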
auto invalidRect = runs.front();
for (size_t i = 1; i < runs.size(); ++i)
{
invalidRect |= runs[i];
}
// verify the rect matches the invalid one.
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, invalidRect);
// We would expect a CUP here, but the cursor is already at the home position
qExpectedInput.push_back("\x1b[3L"); // insert 3 lines
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
scrollDelta = { 0, -1 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled one up, only bottom line is invalid. ----"));
invalid = view.ToExclusive();
invalid.Top = invalid.Bottom - 1;
const auto runs = engine->_invalidMap.runs();
VERIFY_ARE_EQUAL(1u, runs.size());
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, runs.front());
qExpectedInput.push_back("\x1b[32;1H"); // Bottom of buffer
qExpectedInput.push_back("\n"); // Scroll down once
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
scrollDelta = { 0, -3 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled three up, only bottom 3 lines are invalid. ----"));
invalid = view.ToExclusive();
invalid.Top = invalid.Bottom - 3;
// we should have 3 runs and build a rectangle out of them
const auto runs = engine->_invalidMap.runs();
VERIFY_ARE_EQUAL(3u, runs.size());
auto invalidRect = runs.front();
for (size_t i = 1; i < runs.size(); ++i)
{
invalidRect |= runs[i];
}
// verify the rect matches the invalid one.
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, invalidRect);
// We would expect a CUP here, but we're already at the bottom from the last call.
qExpectedInput.push_back("\n\n\n"); // Scroll down three times
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
Log::Comment(NoThrowString().Format(
L"Multiple scrolls are coalesced"));
scrollDelta = { 0, 1 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
scrollDelta = { 0, 2 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled three down, only top 3 lines are invalid. ----"));
invalid = view.ToExclusive();
invalid.Bottom = 3;
// we should have 3 runs and build a rectangle out of them
const auto runs = engine->_invalidMap.runs();
VERIFY_ARE_EQUAL(3u, runs.size());
auto invalidRect = runs.front();
for (size_t i = 1; i < runs.size(); ++i)
{
invalidRect |= runs[i];
}
// verify the rect matches the invalid one.
VERIFY_ARE_EQUAL(til::rectangle{ Viewport::FromExclusive(invalid).ToInclusive() }, invalidRect);
qExpectedInput.push_back("\x1b[H"); // Go to home
qExpectedInput.push_back("\x1b[3L"); // insert 3 lines
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
scrollDelta = { 0, 1 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
Log::Comment(engine->_invalidMap.to_string().c_str());
scrollDelta = { 0, -1 };
VERIFY_SUCCEEDED(engine->InvalidateScroll(&scrollDelta));
Log::Comment(engine->_invalidMap.to_string().c_str());
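// The +1 and -1 scrolls cancel out, so in theory nothing is dirty and no
// output should be needed; for now the engine still repaints with a full
// clear (see MSFT:14169294 below).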
qExpectedInput.push_back("\x1b[2J");
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"---- Scrolled one down and one up, nothing should change ----"
L" But it still does for now MSFT:14169294"));
const auto runs = engine->_invalidMap.runs();
auto invalidRect = runs.front();
for (size_t i = 1; i < runs.size(); ++i)
{
invalidRect |= runs[i];
}
// only the bottom line should be dirty.
// When we scrolled down, the bitmap looked like this:
// 1111
// 0000
// 0000
// 0000
// And then we scrolled up and the top line fell off and a bottom
// line was filled in like this:
// 0000
// 0000
// 0000
// 1111
const til::rectangle expected{ til::point{ view.Left(), view.BottomInclusive() }, til::size{ view.Width(), 1 } };
VERIFY_ARE_EQUAL(expected, invalidRect);
VERIFY_SUCCEEDED(engine->ScrollFrame());
});
}
void VtRendererTest::XtermTestColors()
{
wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
std::unique_ptr<XtermEngine> engine = std::make_unique<XtermEngine>(std::move(hFile), SetUpViewport(), false);
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
RenderData renderData;
// Verify the first paint emits a clear and go home
qExpectedInput.push_back("\x1b[2J");
VERIFY_IS_TRUE(engine->_firstPaint);
TestPaint(*engine, [&]() {
VERIFY_IS_FALSE(engine->_firstPaint);
});
Viewport view = SetUpViewport();
Log::Comment(NoThrowString().Format(
L"Test changing the text attributes"));
Log::Comment(NoThrowString().Format(
L"Begin by setting the default colors"));
qExpectedInput.push_back("\x1b[m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes({},
&renderData,
false,
false));
TestPaint(*engine, [&]() {
TextAttribute textAttributes;
Log::Comment(NoThrowString().Format(
L"----Change only the BG----"));
textAttributes.SetIndexedBackground(TextColor::DARK_RED);
qExpectedInput.push_back("\x1b[41m"); // Background DARK_RED
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
Log::Comment(NoThrowString().Format(
L"----Change only the FG----"));
textAttributes.SetIndexedForeground(TextColor::DARK_WHITE);
qExpectedInput.push_back("\x1b[37m"); // Foreground DARK_WHITE
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
Log::Comment(NoThrowString().Format(
L"----Change only the BG to an RGB value----"));
textAttributes.SetBackground(RGB(19, 161, 14));
qExpectedInput.push_back("\x1b[42m"); // Background DARK_GREEN
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
false));
Log::Comment(NoThrowString().Format(
L"----Change only the FG to an RGB value----"));
textAttributes.SetForeground(RGB(193, 156, 0));
qExpectedInput.push_back("\x1b[33m"); // Foreground DARK_YELLOW
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
false,
                                                  false));

    Log::Comment(NoThrowString().Format(
        L"----Change only the BG to the 'Default' background----"));
    textAttributes.SetDefaultBackground();
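    // The target client here can't reset just one color to default, so changing
    // either color to default is expected to trigger a full SGR 0 reset, with
    // the still-active foreground reapplied immediately afterwards.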
qExpectedInput.push_back("\x1b[m"); // Both foreground and background default
qExpectedInput.push_back("\x1b[33m"); // Reapply foreground DARK_YELLOW
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
                                                  false,
                                                  false));

    Log::Comment(NoThrowString().Format(
L"----Change only the FG to a 256-color index----"));
    textAttributes.SetIndexedForeground256(TextColor::DARK_WHITE);
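    // For this engine, 256-color indexes are approximated down to the 16-color
    // palette (via TextColor::GetLegacyIndex), so an index in the basic ANSI
    // range should come out as SGR 30-37 rather than an extended 38;5;n sequence.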
qExpectedInput.push_back("\x1b[37m"); // Foreground DARK_WHITE
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
                                                  false,
                                                  false));

    Log::Comment(NoThrowString().Format(
        L"----Change only the BG to a 256-color index----"));
    textAttributes.SetIndexedBackground256(TextColor::DARK_RED);
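    // The same narrowing is expected for the background: the 256-color DARK_RED
    // index should be emitted as the basic SGR 41 sequence.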
qExpectedInput.push_back("\x1b[41m"); // Background DARK_RED
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
                                                  false,
                                                  false));

    Log::Comment(NoThrowString().Format(
        L"----Change only the FG to the 'Default' foreground----"));
    textAttributes.SetDefaultForeground();
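    // Mirror of the background case above: defaulting only the foreground still
    // forces the SGR 0 reset, after which the non-default background must be
    // reapplied.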
qExpectedInput.push_back("\x1b[m"); // Both foreground and background default
qExpectedInput.push_back("\x1b[41m"); // Reapply background DARK_RED
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
&renderData,
                                                  false,
                                                      false));

        Log::Comment(NoThrowString().Format(
            L"----Back to defaults----"));
        textAttributes = {};
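        // Resetting to default-on-default is expected to produce a bare SGR
        // reset ("\x1b[m"), which also clears any rendition attributes.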
        qExpectedInput.push_back("\x1b[m");
        VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes,
                                                      &renderData,
                                                      false,
                                                      false));
    });

    TestPaint(*engine, [&]() {
        Log::Comment(NoThrowString().Format(
            L"Make sure that color setting persists across EndPaint/StartPaint"));
        qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
        VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes({},
                                                      &renderData,
                                                      false,
                                                      false));
        WriteCallback(EMPTY_CALLBACK_SENTINEL, 1); // This will make sure nothing was written to the callback
    });
}

void VtRendererTest::XtermTestCursor()
{
    wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
    std::unique_ptr<XtermEngine> engine = std::make_unique<XtermEngine>(std::move(hFile), SetUpViewport(), false);
    auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
    engine->SetTestCallback(pfn);

    // Verify the first paint emits a clear and go home
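    // ("\x1b[2J" is ED - Erase in Display - with parameter 2: erase the whole screen.)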
    qExpectedInput.push_back("\x1b[2J");
    VERIFY_IS_TRUE(engine->_firstPaint);
    TestPaint(*engine, [&]() {
        VERIFY_IS_FALSE(engine->_firstPaint);
    });

    Viewport view = SetUpViewport();

    Log::Comment(NoThrowString().Format(
        L"Test moving the cursor around. Every sequence should have both params to CUP explicitly."));
    TestPaint(*engine, [&]() {
        qExpectedInput.push_back("\x1b[2;2H");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 1, 1 }));

        Log::Comment(NoThrowString().Format(
            L"----Only move Y coord----"));
        qExpectedInput.push_back("\x1b[31;2H");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 1, 30 }));

        Log::Comment(NoThrowString().Format(
            L"----Only move X coord----"));
        qExpectedInput.push_back("\x1b[29C");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 30, 30 }));

        Log::Comment(NoThrowString().Format(
            L"----Sending the same move sends nothing----"));
        qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 30, 30 }));
        WriteCallback(EMPTY_CALLBACK_SENTINEL, 1);

        Log::Comment(NoThrowString().Format(
            L"----moving home sends a simple sequence----"));
        qExpectedInput.push_back("\x1b[H");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 0 }));

        Log::Comment(NoThrowString().Format(
            L"----move into the line to test some other sequences----"));
        qExpectedInput.push_back("\x1b[7C");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 7, 0 }));

        Log::Comment(NoThrowString().Format(
            L"----move down one line (x stays the same)----"));
        qExpectedInput.push_back("\n");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 7, 1 }));

        Log::Comment(NoThrowString().Format(
            L"----move to the start of the next line----"));
        qExpectedInput.push_back("\r\n");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 2 }));

        Log::Comment(NoThrowString().Format(
            L"----move into the line to test some other sequences----"));
        qExpectedInput.push_back("\x1b[2;8H");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 7, 1 }));

        Log::Comment(NoThrowString().Format(
            L"----move to the start of this line (y stays the same)----"));
        qExpectedInput.push_back("\r");
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 1 }));
    });
    TestPaint(*engine, [&]() {
        Log::Comment(NoThrowString().Format(
            L"Sending the same move across paint calls sends nothing. "
            L"The cursor's last \"real\" position was 0,0"));
        qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 1 }));
        WriteCallback(EMPTY_CALLBACK_SENTINEL, 1);

        Log::Comment(NoThrowString().Format(
            L"Paint some text at 0,0, then try moving the cursor to where it currently is."));
        qExpectedInput.push_back("\x1b[1C");
        qExpectedInput.push_back("asdfghjkl");
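        // Build one single-width Cluster per character; rgWidths gives the
        // column width of each glyph.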
        const wchar_t* const line = L"asdfghjkl";
        const unsigned char rgWidths[] = { 1, 1, 1, 1, 1, 1, 1, 1, 1 };

        std::vector<Cluster> clusters;
        for (size_t i = 0; i < wcslen(line); i++)
        {
            clusters.emplace_back(std::wstring_view{ &line[i], 1 }, static_cast<size_t>(rgWidths[i]));
        }
        VERIFY_SUCCEEDED(engine->PaintBufferLine({ clusters.data(), clusters.size() }, { 1, 1 }, false, false));
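        // PaintBufferLine advanced the tracked cursor to the end of the text
        // ({ 10, 1 }), so this move is a no-op and must write nothing.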
        qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 10, 1 }));
        WriteCallback(EMPTY_CALLBACK_SENTINEL, 1);
    });

    // Note that only PaintBufferLine updates the "Real" cursor position, which
    // the cursor is moved back to at the end of each paint
    TestPaint(*engine, [&]() {
        Log::Comment(NoThrowString().Format(
            L"Sending the same move across paint calls sends nothing."));
        qExpectedInput.push_back(EMPTY_CALLBACK_SENTINEL);
        VERIFY_SUCCEEDED(engine->_MoveCursor({ 10, 1 }));
        WriteCallback(EMPTY_CALLBACK_SENTINEL, 1);
    });
}

void VtRendererTest::XtermTestAttributesAcrossReset()
{
    BEGIN_TEST_METHOD_PROPERTIES()
        TEST_METHOD_PROPERTY(L"Data:renditionAttribute", L"{1, 4, 7}")
    END_TEST_METHOD_PROPERTIES()

    int renditionAttribute;
    VERIFY_SUCCEEDED(TestData::TryGetValue(L"renditionAttribute", renditionAttribute));
    std::stringstream renditionSequence;
    renditionSequence << "\x1b[" << renditionAttribute << "m";
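    // The data values are SGR parameters: 1 (bold), 4 (underline), and
    // 7 (negative/reverse video), so renditionSequence is e.g. "\x1b[4m".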

    wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
    std::unique_ptr<XtermEngine> engine = std::make_unique<XtermEngine>(std::move(hFile), SetUpViewport(), false);
    auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
    engine->SetTestCallback(pfn);

    RenderData renderData;

    Log::Comment(L"Make sure rendition attributes are retained when colors are reset");

    Log::Comment(L"----Start With All Attributes Reset----");
    TextAttribute textAttributes = {};
    qExpectedInput.push_back("\x1b[m");
    VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));

    switch (renditionAttribute)
    {
    case GraphicsOptions::BoldBright:
        Log::Comment(L"----Set Bold Attribute----");
        textAttributes.SetBold(true);
        break;
    case GraphicsOptions::Underline:
        Log::Comment(L"----Set Underline Attribute----");
        textAttributes.SetUnderlined(true);
        break;
    case GraphicsOptions::Negative:
        Log::Comment(L"----Set Negative Attribute----");
        textAttributes.SetReverseVideo(true);
        break;
    }
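    // Only the attribute changed, so the rendition sequence is expected on
    // its own, with no color or reset sequences accompanying it.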
    qExpectedInput.push_back(renditionSequence.str());
    VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
Log::Comment(L"----Set Green Foreground----");
    textAttributes.SetIndexedForeground(TextColor::DARK_GREEN);
qExpectedInput.push_back("\x1b[32m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
Log::Comment(L"----Reset Default Foreground and Retain Rendition----");
textAttributes.SetDefaultForeground();
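// Switching back to the default foreground forces a full SGR reset ("\x1b[m"),
// so the engine must re-emit the active rendition attributes afterwards.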
qExpectedInput.push_back("\x1b[m");
qExpectedInput.push_back(renditionSequence.str());
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
Log::Comment(L"----Set Green Background----");
textAttributes.SetIndexedBackground(TextColor::DARK_GREEN);
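// ANSI index 2 as a background maps to SGR 42 (green background).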
qExpectedInput.push_back("\x1b[42m");
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
Log::Comment(L"----Reset Default Background and Retain Rendition----");
textAttributes.SetDefaultBackground();
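// As with the foreground, a default background triggers an SGR reset followed
// by the retained rendition attributes.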
qExpectedInput.push_back("\x1b[m");
qExpectedInput.push_back(renditionSequence.str());
VERIFY_SUCCEEDED(engine->UpdateDrawingBrushes(textAttributes, &renderData, false, false));
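// Every queued sequence should have been consumed by the write callback at this point.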
VerifyExpectedInputsDrained();
}
void VtRendererTest::TestWrapping()
{
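// An invalid handle is fine here: the VT output is captured by the test
// callback below rather than written to a real pipe.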
wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
std::unique_ptr<Xterm256Engine> engine = std::make_unique<Xterm256Engine>(std::move(hFile), SetUpViewport());
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
// Verify the first paint emits a clear and go home
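// ("\x1b[2J" is ED 2: erase the entire display.)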
qExpectedInput.push_back("\x1b[2J");
VERIFY_IS_TRUE(engine->_firstPaint);
TestPaint(*engine, [&]() {
VERIFY_IS_FALSE(engine->_firstPaint);
});
Viewport view = SetUpViewport();
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"Make sure the cursor is at 0,0"));
qExpectedInput.push_back("\x1b[H");
VERIFY_SUCCEEDED(engine->_MoveCursor({ 0, 0 }));
});
TestPaint(*engine, [&]() {
Log::Comment(NoThrowString().Format(
L"Painting a line that wrapped, then painting another line, and "
L"making sure we don't manually move the cursor between those paints."));
qExpectedInput.push_back("asdfghjkl");
// TODO: Undoing this behavior due to 18123777. Will come back in MSFT:16485846
qExpectedInput.push_back("\r\n");
qExpectedInput.push_back("zxcvbnm,.");
const wchar_t* const line1 = L"asdfghjkl";
const wchar_t* const line2 = L"zxcvbnm,.";
const unsigned char rgWidths[] = { 1, 1, 1, 1, 1, 1, 1, 1, 1 };
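// Both test lines are nine single-width glyphs, so each cluster covers one column.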
std::vector<Cluster> clusters1;
for (size_t i = 0; i < wcslen(line1); i++)
{
clusters1.emplace_back(std::wstring_view{ &line1[i], 1 }, static_cast<size_t>(rgWidths[i]));
}
std::vector<Cluster> clusters2;
for (size_t i = 0; i < wcslen(line2); i++)
{
clusters2.emplace_back(std::wstring_view{ &line2[i], 1 }, static_cast<size_t>(rgWidths[i]));
}
VERIFY_SUCCEEDED(engine->PaintBufferLine({ clusters1.data(), clusters1.size() }, { 0, 0 }, false, false));
VERIFY_SUCCEEDED(engine->PaintBufferLine({ clusters2.data(), clusters2.size() }, { 0, 1 }, false, false));
});
}
void VtRendererTest::TestResize()
{
Viewport view = SetUpViewport();
wil::unique_hfile hFile = wil::unique_hfile(INVALID_HANDLE_VALUE);
auto engine = std::make_unique<Xterm256Engine>(std::move(hFile), view);
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
// Verify the first paint emits a clear and go home
qExpectedInput.push_back("\x1b[2J");
VERIFY_IS_TRUE(engine->_firstPaint);
VERIFY_IS_TRUE(engine->_suppressResizeRepaint);
// The renderer (in Renderer@_PaintFrameForEngine..._CheckViewportAndScroll)
// will manually call UpdateViewport once before actually painting the
// first frame. Replicate that behavior here
VERIFY_SUCCEEDED(engine->UpdateViewport(view.ToInclusive()));
TestPaint(*engine, [&]() {
VERIFY_IS_FALSE(engine->_firstPaint);
VERIFY_IS_FALSE(engine->_suppressResizeRepaint);
});
// Resize the viewport to 120x30
// Everything should be invalidated, and a resize message sent.
const auto newView = Viewport::FromDimensions({ 0, 0 }, { 120, 30 });
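// "\x1b[8;<rows>;<cols>t" is the XTerm window manipulation sequence requesting
// a resize to 30 rows by 120 columns.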
qExpectedInput.push_back("\x1b[8;30;120t");
VERIFY_SUCCEEDED(engine->UpdateViewport(newView.ToInclusive()));
TestPaint(*engine, [&]() {
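// The resize should have marked the entire invalidation bitmap as dirty.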
VERIFY_IS_TRUE(engine->_invalidMap.all());
VERIFY_IS_FALSE(engine->_firstPaint);
VERIFY_IS_FALSE(engine->_suppressResizeRepaint);
});
}
void VtRendererTest::TestCursorVisibility()
{
Viewport view = SetUpViewport();
wil::unique_hfile hFile{ INVALID_HANDLE_VALUE };
auto engine = std::make_unique<Xterm256Engine>(std::move(hFile), view);
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
// Verify the first paint emits a clear
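// ("\x1b[2J" is the ED "Erase in Display" sequence, clearing the whole screen.)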
qExpectedInput.push_back("\x1b[2J");
VERIFY_IS_TRUE(engine->_firstPaint);
VERIFY_IS_FALSE(engine->_lastCursorIsVisible);
VERIFY_IS_TRUE(engine->_nextCursorIsVisible);
TestPaint(*engine, [&]() {
// During StartPaint, we'll mark the cursor as off. Make sure that happens.
VERIFY_IS_FALSE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_firstPaint);
});
// The cursor wasn't painted in the last frame.
VERIFY_IS_FALSE(engine->_lastCursorIsVisible);
VERIFY_IS_FALSE(engine->_nextCursorIsVisible);
COORD origin{ 0, 0 };
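// _lastText holds the last position the engine moved the cursor to; it
// shouldn't already be sitting at the origin before the first paint.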
VERIFY_ARE_NOT_EQUAL(origin, engine->_lastText);
CursorOptions options{};
options.coordCursor = origin;
// Frame 1: Paint the cursor at the home position. At the end of the frame,
// the cursor should be on. Because we're moving the cursor with CUP, we
// need to disable the cursor during this frame.
TestPaint(*engine, [&]() {
VERIFY_IS_FALSE(engine->_lastCursorIsVisible);
VERIFY_IS_FALSE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
Log::Comment(NoThrowString().Format(L"Make sure the cursor is at 0,0"));
qExpectedInput.push_back("\x1b[H");
VERIFY_SUCCEEDED(engine->PaintCursor(options));
VERIFY_IS_TRUE(engine->_nextCursorIsVisible);
VERIFY_IS_TRUE(engine->_needToDisableCursor);
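// The frame should end by re-showing the cursor with DECTCEM ("\x1b[?25h").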
qExpectedInput.push_back("\x1b[?25h");
});
VERIFY_IS_TRUE(engine->_lastCursorIsVisible);
VERIFY_IS_TRUE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
// Frame 2: Paint the cursor again at the home position. At the end of the
// frame, the cursor should be on, the same as before. We aren't moving the
// cursor during this frame, so _needToDisableCursor will stay false.
TestPaint(*engine, [&]() {
VERIFY_IS_TRUE(engine->_lastCursorIsVisible);
VERIFY_IS_FALSE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
Log::Comment(NoThrowString().Format(L"If we just paint the cursor again at the same position, the cursor should not need to be disabled"));
VERIFY_SUCCEEDED(engine->PaintCursor(options));
VERIFY_IS_TRUE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
});
VERIFY_IS_TRUE(engine->_lastCursorIsVisible);
VERIFY_IS_TRUE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
// Frame 3: Paint the cursor at 2,2. At the end of the frame, the cursor
// should be on, the same as before. Because we're moving the cursor with
// CUP, we need to disable the cursor during this frame.
TestPaint(*engine, [&]() {
VERIFY_IS_TRUE(engine->_lastCursorIsVisible);
VERIFY_IS_FALSE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
Log::Comment(NoThrowString().Format(L"Move the cursor to 2,2"));
qExpectedInput.push_back("\x1b[3;3H");
options.coordCursor = { 2, 2 };
VERIFY_SUCCEEDED(engine->PaintCursor(options));
VERIFY_IS_TRUE(engine->_lastCursorIsVisible);
VERIFY_IS_TRUE(engine->_nextCursorIsVisible);
VERIFY_IS_TRUE(engine->_needToDisableCursor);
// Because _needToDisableCursor is true, we'll insert a ?25l at the
// start of the frame. Unfortunately, we can't test to make sure that
// it's there, but we can ensure that the matching ?25h is printed:
qExpectedInput.push_back("\x1b[?25h");
});
VERIFY_IS_TRUE(engine->_lastCursorIsVisible);
VERIFY_IS_TRUE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
// Frame 4: Don't paint the cursor. At the end of the frame, the cursor
// should be off.
Log::Comment(NoThrowString().Format(L"Painting without calling PaintCursor will hide the cursor"));
TestPaint(*engine, [&]() {
VERIFY_IS_TRUE(engine->_lastCursorIsVisible);
VERIFY_IS_FALSE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
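// Since PaintCursor was never called this frame, the frame should end by
// hiding the cursor with DECTCEM ("\x1b[?25l").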
qExpectedInput.push_back("\x1b[?25l");
});
VERIFY_IS_FALSE(engine->_lastCursorIsVisible);
VERIFY_IS_FALSE(engine->_nextCursorIsVisible);
VERIFY_IS_FALSE(engine->_needToDisableCursor);
}
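// Exercises _WriteFormatted's reuse of the static format buffer: it should
// grow on first use, be reused as-is for an identical write, and grow again
// when a larger sequence comes through.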
void VtRendererTest::FormattedString()
{
// This test works with a static cache variable that can be affected
// by other tests, so isolate it at the method level.
BEGIN_TEST_METHOD_PROPERTIES()
TEST_METHOD_PROPERTY(L"IsolationLevel", L"Method")
END_TEST_METHOD_PROPERTIES();
static const auto format = FMT_COMPILE("\x1b[{}m");
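// A single-parameter SGR (Select Graphic Rendition) sequence.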
const auto value = 12;
Viewport view = SetUpViewport();
wil::unique_hfile hFile{ INVALID_HANDLE_VALUE };
auto engine = std::make_unique<Xterm256Engine>(std::move(hFile), view);
auto pfn = std::bind(&VtRendererTest::WriteCallback, this, std::placeholders::_1, std::placeholders::_2);
engine->SetTestCallback(pfn);
Log::Comment(L"1.) Write it once. It should resize itself.");
qExpectedInput.push_back("\x1b[12m");
VERIFY_SUCCEEDED(engine->_WriteFormatted(format, value));
Log::Comment(L"2.) Write the same thing again, should be fine.");
qExpectedInput.push_back("\x1b[12m");
VERIFY_SUCCEEDED(engine->_WriteFormatted(format, value));
Log::Comment(L"3.) Now write something huge. Should resize itself and still be fine.");
static const auto bigFormat = FMT_COMPILE("\x1b[28;3;{};{};{}m");
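// Three parameters this time, so the formatted output is noticeably longer
// and should force the cached buffer to grow.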
const auto bigValue = 500;
qExpectedInput.push_back("\x1b[28;3;500;500;500m");
VERIFY_SUCCEEDED(engine->_WriteFormatted(bigFormat, bigValue, bigValue, bigValue));
}