pulumi/sdk/proto/go/analyzer.pb.go

// Code generated by protoc-gen-go.
// source: analyzer.proto
// DO NOT EDIT!
/*
Package pulumirpc is a generated protocol buffer package.

It is generated from these files:
	analyzer.proto
	engine.proto
	errors.proto
	language.proto
	plugin.proto
	provider.proto
	resource.proto

It has these top-level messages:
	AnalyzeRequest
	AnalyzeResponse
	AnalyzeFailure
	LogRequest
	ErrorCause
	GetRequiredPluginsRequest
	GetRequiredPluginsResponse
	RunRequest
	RunResponse
	PluginInfo
	PluginDependency
	ConfigureRequest
	InvokeRequest
	InvokeResponse
	CheckRequest
	CheckResponse
	CheckFailure
	DiffRequest
	DiffResponse
	CreateRequest
	CreateResponse
	UpdateRequest
	UpdateResponse
	DeleteRequest
	RegisterResourceRequest
	RegisterResourceResponse
	RegisterResourceOutputsRequest
*/
package pulumirpc

import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/ptypes/empty"
import google_protobuf1 "github.com/golang/protobuf/ptypes/struct"

import (
	context "golang.org/x/net/context"
	grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
type AnalyzeRequest struct {
	Type       string                   `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
	Properties *google_protobuf1.Struct `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}

func (m *AnalyzeRequest) Reset()                    { *m = AnalyzeRequest{} }
func (m *AnalyzeRequest) String() string            { return proto.CompactTextString(m) }
func (*AnalyzeRequest) ProtoMessage()               {}
func (*AnalyzeRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }

func (m *AnalyzeRequest) GetType() string {
	if m != nil {
		return m.Type
	}
	return ""
}

func (m *AnalyzeRequest) GetProperties() *google_protobuf1.Struct {
	if m != nil {
		return m.Properties
	}
	return nil
}
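
// The following is a hand-written usage sketch, not generated code: it shows how a caller
// might populate an AnalyzeRequest, carrying the resource's input properties as a protobuf
// Struct. The resource type token and the "acl" property are illustrative assumptions.
//
//	props := &google_protobuf1.Struct{
//		Fields: map[string]*google_protobuf1.Value{
//			"acl": {Kind: &google_protobuf1.Value_StringValue{StringValue: "public-read"}},
//		},
//	}
//	req := &AnalyzeRequest{Type: "aws:s3/bucket:Bucket", Properties: props}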
type AnalyzeResponse struct {
	Failures []*AnalyzeFailure `protobuf:"bytes,1,rep,name=failures" json:"failures,omitempty"`
}

func (m *AnalyzeResponse) Reset()                    { *m = AnalyzeResponse{} }
func (m *AnalyzeResponse) String() string            { return proto.CompactTextString(m) }
func (*AnalyzeResponse) ProtoMessage()               {}
func (*AnalyzeResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }

func (m *AnalyzeResponse) GetFailures() []*AnalyzeFailure {
	if m != nil {
		return m.Failures
	}
	return nil
}
type AnalyzeFailure struct {
	Property string `protobuf:"bytes,1,opt,name=property" json:"property,omitempty"`
	Reason   string `protobuf:"bytes,2,opt,name=reason" json:"reason,omitempty"`
}

func (m *AnalyzeFailure) Reset()                    { *m = AnalyzeFailure{} }
func (m *AnalyzeFailure) String() string            { return proto.CompactTextString(m) }
func (*AnalyzeFailure) ProtoMessage()               {}
func (*AnalyzeFailure) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }

func (m *AnalyzeFailure) GetProperty() string {
	if m != nil {
		return m.Property
	}
	return ""
}

func (m *AnalyzeFailure) GetReason() string {
	if m != nil {
		return m.Reason
	}
	return ""
}
func init() {
	proto.RegisterType((*AnalyzeRequest)(nil), "pulumirpc.AnalyzeRequest")
	proto.RegisterType((*AnalyzeResponse)(nil), "pulumirpc.AnalyzeResponse")
	proto.RegisterType((*AnalyzeFailure)(nil), "pulumirpc.AnalyzeFailure")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for Analyzer service

type AnalyzerClient interface {
	// Analyze analyzes a single resource object, and returns any errors that it finds.
	Analyze(ctx context.Context, in *AnalyzeRequest, opts ...grpc.CallOption) (*AnalyzeResponse, error)
	// GetPluginInfo returns generic information about this plugin, like its version.
	GetPluginInfo(ctx context.Context, in *google_protobuf.Empty, opts ...grpc.CallOption) (*PluginInfo, error)
}

type analyzerClient struct {
	cc *grpc.ClientConn
}

func NewAnalyzerClient(cc *grpc.ClientConn) AnalyzerClient {
	return &analyzerClient{cc}
}
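
// A minimal client-side sketch (hand-written, not generated): a hypothetical plugin host
// could dial the analyzer's gRPC endpoint and invoke Analyze roughly as follows. The
// address, the req value (see the AnalyzeRequest sketch above), and the use of "fmt" are
// assumptions for illustration only.
//
//	conn, err := grpc.Dial("127.0.0.1:10000", grpc.WithInsecure())
//	if err != nil {
//		return err
//	}
//	defer conn.Close()
//	client := NewAnalyzerClient(conn)
//	resp, err := client.Analyze(context.Background(), req)
//	if err != nil {
//		return err
//	}
//	for _, failure := range resp.GetFailures() {
//		fmt.Printf("analyzer flagged %q: %s\n", failure.GetProperty(), failure.GetReason())
//	}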
func (c *analyzerClient) Analyze(ctx context.Context, in *AnalyzeRequest, opts ...grpc.CallOption) (*AnalyzeResponse, error) {
	out := new(AnalyzeResponse)
	err := grpc.Invoke(ctx, "/pulumirpc.Analyzer/Analyze", in, out, c.cc, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

func (c *analyzerClient) GetPluginInfo(ctx context.Context, in *google_protobuf.Empty, opts ...grpc.CallOption) (*PluginInfo, error) {
	out := new(PluginInfo)
	err := grpc.Invoke(ctx, "/pulumirpc.Analyzer/GetPluginInfo", in, out, c.cc, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}
// Server API for Analyzer service

type AnalyzerServer interface {
	// Analyze analyzes a single resource object, and returns any errors that it finds.
	Analyze(context.Context, *AnalyzeRequest) (*AnalyzeResponse, error)
	// GetPluginInfo returns generic information about this plugin, like its version.
	GetPluginInfo(context.Context, *google_protobuf.Empty) (*PluginInfo, error)
}

func RegisterAnalyzerServer(s *grpc.Server, srv AnalyzerServer) {
	s.RegisterService(&_Analyzer_serviceDesc, srv)
}
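
// A minimal server-side sketch (hand-written, not generated): an analyzer plugin would
// implement AnalyzerServer and register itself on a gRPC server. The myAnalyzer type, the
// listener address, and the "net" import are assumptions for illustration only.
//
//	lis, err := net.Listen("tcp", "127.0.0.1:0")
//	if err != nil {
//		return err
//	}
//	srv := grpc.NewServer()
//	RegisterAnalyzerServer(srv, &myAnalyzer{}) // myAnalyzer implements Analyze and GetPluginInfo
//	if err := srv.Serve(lis); err != nil {
//		return err
//	}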
func _Analyzer_Analyze_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	in := new(AnalyzeRequest)
	if err := dec(in); err != nil {
		return nil, err
	}
	if interceptor == nil {
		return srv.(AnalyzerServer).Analyze(ctx, in)
	}
	info := &grpc.UnaryServerInfo{
		Server:     srv,
		FullMethod: "/pulumirpc.Analyzer/Analyze",
	}
	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
		return srv.(AnalyzerServer).Analyze(ctx, req.(*AnalyzeRequest))
	}
	return interceptor(ctx, in, info, handler)
}

func _Analyzer_GetPluginInfo_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	in := new(google_protobuf.Empty)
	if err := dec(in); err != nil {
		return nil, err
	}
	if interceptor == nil {
		return srv.(AnalyzerServer).GetPluginInfo(ctx, in)
	}
	info := &grpc.UnaryServerInfo{
		Server:     srv,
		FullMethod: "/pulumirpc.Analyzer/GetPluginInfo",
	}
	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
		return srv.(AnalyzerServer).GetPluginInfo(ctx, req.(*google_protobuf.Empty))
	}
	return interceptor(ctx, in, info, handler)
}
var _Analyzer_serviceDesc = grpc.ServiceDesc{
	ServiceName: "pulumirpc.Analyzer",
	HandlerType: (*AnalyzerServer)(nil),
	Methods: []grpc.MethodDesc{
		{
			MethodName: "Analyze",
			Handler:    _Analyzer_Analyze_Handler,
		},
		{
			MethodName: "GetPluginInfo",
			Handler:    _Analyzer_GetPluginInfo_Handler,
		},
	},
	Streams:  []grpc.StreamDesc{},
	Metadata: "analyzer.proto",
}
func init() { proto.RegisterFile("analyzer.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 287 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x6c, 0x91, 0x4f, 0x4b, 0xc3, 0x40,
0x10, 0xc5, 0x1b, 0x95, 0xda, 0x4e, 0xb5, 0xc2, 0x80, 0xb5, 0xae, 0x1e, 0x42, 0x4e, 0x39, 0x6d,
0x21, 0x22, 0x5e, 0x55, 0xfc, 0x7b, 0x93, 0x78, 0xf6, 0x90, 0x96, 0x49, 0x08, 0xa4, 0xd9, 0x75,
0xff, 0x1c, 0xe2, 0xa7, 0xf0, 0x23, 0x8b, 0xbb, 0x6b, 0x2c, 0xda, 0xdb, 0x0c, 0xef, 0xf1, 0x9b,
0x37, 0x33, 0x30, 0x2d, 0xda, 0xa2, 0xe9, 0x3e, 0x48, 0x71, 0xa9, 0x84, 0x11, 0x38, 0x96, 0xb6,
0xb1, 0xeb, 0x5a, 0xc9, 0x15, 0x3b, 0x90, 0x8d, 0xad, 0xea, 0xd6, 0x0b, 0xec, 0xac, 0x12, 0xa2,
0x6a, 0x68, 0xe1, 0xba, 0xa5, 0x2d, 0x17, 0xb4, 0x96, 0xa6, 0x0b, 0xe2, 0xf9, 0x5f, 0x51, 0x1b,
0x65, 0x57, 0xc6, 0xab, 0xc9, 0x1b, 0x4c, 0x6f, 0xfc, 0x94, 0x9c, 0xde, 0x2d, 0x69, 0x83, 0x08,
0x7b, 0xa6, 0x93, 0x34, 0x8f, 0xe2, 0x28, 0x1d, 0xe7, 0xae, 0xc6, 0x2b, 0x00, 0xa9, 0x84, 0x24,
0x65, 0x6a, 0xd2, 0xf3, 0x9d, 0x38, 0x4a, 0x27, 0xd9, 0x09, 0xf7, 0x60, 0xfe, 0x03, 0xe6, 0xaf,
0x0e, 0x9c, 0x6f, 0x58, 0x93, 0x27, 0x38, 0xea, 0xf1, 0x5a, 0x8a, 0x56, 0x13, 0x5e, 0xc2, 0xa8,
0x2c, 0xea, 0xc6, 0x2a, 0xd2, 0xf3, 0x28, 0xde, 0x4d, 0x27, 0xd9, 0x29, 0xef, 0x17, 0xe3, 0xc1,
0xfd, 0xe0, 0x1d, 0x79, 0x6f, 0x4d, 0xee, 0xfa, 0xa0, 0x41, 0x43, 0x06, 0xa3, 0x30, 0xa9, 0x0b,
0x61, 0xfb, 0x1e, 0x67, 0x30, 0x54, 0x54, 0x68, 0xd1, 0xba, 0xb0, 0xe3, 0x3c, 0x74, 0xd9, 0x67,
0x04, 0xa3, 0x80, 0x51, 0x78, 0x0b, 0xfb, 0xa1, 0xc6, 0x2d, 0x11, 0xc2, 0x3d, 0x18, 0xdb, 0x26,
0xf9, 0x5d, 0x92, 0x01, 0x5e, 0xc3, 0xe1, 0x23, 0x99, 0x17, 0xf7, 0x8d, 0xe7, 0xb6, 0x14, 0x38,
0xfb, 0x77, 0x96, 0xfb, 0xef, 0x67, 0xb0, 0xe3, 0x0d, 0xcc, 0xaf, 0x3d, 0x19, 0x2c, 0x87, 0xce,
0x78, 0xf1, 0x15, 0x00, 0x00, 0xff, 0xff, 0xff, 0x6d, 0x9c, 0x46, 0xee, 0x01, 0x00, 0x00,
}