From practice to principle, I will take you through gRPC.

  golang, grpc, php

image

Original address: From practice to principle, I will take you through gRPC.

gRPC plays an important role in the Go ecosystem, and more and more developers are adopting it. I have been recommending it a lot lately, and I hope this article can show you both gRPC's strengths and its pain points. It is a long read, so get ready. The table of contents is as follows:

image

Overview

gRPC is a high-performance, open-source, general-purpose RPC framework designed for mobile and HTTP/2. Official implementations are currently available for C, Java, and Go: grpc, grpc-java, and grpc-go. The C version also supports C, C++, Node.js, Python, Ruby, Objective-C, PHP, and C#.

gRPC is designed on top of the HTTP/2 standard, which brings features such as bidirectional streaming, flow control, header compression, and multiplexing of many requests over a single TCP connection. These features make it perform better on mobile devices, saving power and bandwidth.

Call model

image

1. The client (gRPC Stub) calls method A, initiating the RPC call.

2. The request message is serialized and compressed with Protobuf (the IDL; see the sketch after this list).

3. After receiving the request, the server (gRPC Server) decodes the request body, runs the business logic, and returns a result.

4. The response result is serialized and compressed with Protobuf.

5. The client receives the server's response and decodes the response body. The callback for method A fires, waking the client that was blocked waiting for the response, and the result is returned.
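For concreteness, here is a minimal sketch of the Protobuf IDL that the examples below assume. The package, service, and field names are reconstructed from the generated code shown later (proto.SearchService, r.GetRequest(), SearchResponse{Response: ...}), so treat it as a plausible reconstruction rather than the article's exact file:

// search.proto (reconstructed sketch)
syntax = "proto3";

package proto;

service SearchService {
    rpc Search (SearchRequest) returns (SearchResponse) {}
}

message SearchRequest {
    string request = 1;
}

message SearchResponse {
    string response = 1;
}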

Call method

I. Unary RPC

image

Server

type SearchService struct{}

func (s *SearchService) Search(ctx context.Context, r *pb.SearchRequest) (*pb.SearchResponse, error) {
    return &pb.SearchResponse{Response: r.GetRequest() + " Server"}, nil
}

const PORT = "9001"

func main() {
    server := grpc.NewServer()
    pb.RegisterSearchServiceServer(server, &SearchService{})

    lis, err := net.Listen("tcp", ":"+PORT)
    ...

    server.Serve(lis)
}
  • Create the gRPC Server object, which you can think of as an abstract representation of the server.
  • Register SearchService (which implements the server interface to be called) into the gRPC Server's internal registry. This way, when a request arrives, the matching interface method can be found through internal “service discovery” and dispatched for processing.
  • Create a Listener and listen on the TCP port.
  • The gRPC Server accepts connections via lis.Accept until Stop or GracefulStop is called.

Client

func main() {
    conn, err := grpc.Dial(":"+PORT, grpc.WithInsecure())
    ...
    defer conn.Close()

    client := pb.NewSearchServiceClient(conn)
    resp, err := client.Search(context.Background(), &pb.SearchRequest{
        Request: "gRPC",
    })
    ...
}
  • Create a connection handle to the given target (the server).
  • Create a client object for SearchService.
  • Send the RPC request, block waiting for the synchronous response, and return the result once the callback completes.

II. Server-side streaming RPC

image

Server

func (s *StreamService) List(r *pb.StreamRequest, stream pb.StreamService_ListServer) error {
    for n := 0; n <= 6; n++ {
        stream.Send(&pb.StreamResponse{
            Pt: &pb.StreamPoint{
                ...
            },
        })
    }

    return nil
}

Client

func printLists(client pb.StreamServiceClient, r *pb.StreamRequest) error {
    stream, err := client.List(context.Background(), r)
    ...
    
    for {
        resp, err := stream.Recv()
        if err == io.EOF {
            break
        }
        ...
    }

    return nil
}

III. Client-side streaming RPC

image

Server

func (s *StreamService) Record(stream pb.StreamService_RecordServer) error {
    for {
        r, err := stream.Recv()
        if err == io.EOF {
            return stream.SendAndClose(&pb.StreamResponse{Pt: &pb.StreamPoint{...}})
        }
        ...

    }

    return nil
}

Client

func printRecord(client pb.StreamServiceClient, r *pb.StreamRequest) error {
    stream, err := client.Record(context.Background())
    ...
    
    for n := 0; n < 6; n++ {
        stream.Send(r)
    }

    resp, err := stream.CloseAndRecv()
    ...

    return nil
}

IV. Bidirectional streaming RPC

image

Server

func (s *StreamService) Route(stream pb.StreamService_RouteServer) error {
    for {
        stream.Send(&pb.StreamResponse{...})
        r, err := stream.Recv()
        if err == io.EOF {
            return nil
        }
        ...
    }

    return nil
}

Client

func printRoute(client pb.StreamServiceClient, r *pb.StreamRequest) error {
    stream, err := client.Route(context.Background())
    ...

    for n := 0; n <= 6; n++ {
        stream.Send(r)
        resp, err := stream.Recv()
        if err == io.EOF {
            break
        }
        ...
    }

    stream.CloseSend()

    return nil
}
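For reference, here is a minimal sketch of the StreamService definition that the three streaming examples above assume. The method signatures follow from the generated stream types (StreamService_ListServer and friends); the fields of StreamPoint and StreamRequest are elided in the article's excerpts, so they are left as comments:

// stream.proto (reconstructed sketch)
syntax = "proto3";

package proto;

service StreamService {
    rpc List (StreamRequest) returns (stream StreamResponse) {}
    rpc Record (stream StreamRequest) returns (StreamResponse) {}
    rpc Route (stream StreamRequest) returns (stream StreamResponse) {}
}

message StreamPoint {
    // fields elided in the excerpts above
}

message StreamRequest {
    // fields elided in the excerpts above
}

message StreamResponse {
    StreamPoint pt = 1;
}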

How the client interacts with the server

Before diving into the analysis, we should first get an overall impression of a gRPC call. The simplest way is to capture the packets of a client calling the server and see what happens over the wire during the whole process. See the following figure:

image

  • Magic
  • SETTINGS
  • HEADERS
  • DATA
  • SETTINGS
  • WINDOW_UPDATE
  • PING
  • HEADERS
  • DATA
  • HEADERS
  • WINDOW_UPDATE
  • PING

The capture shows 12 frames in total, all of them significant. Before we analyze them, I suggest you pause and think: what is each one for? Take a guess; it is better to study with questions in mind.

Behavior analysis

Magic

image

The Magic frame establishes the HTTP/2 connection preface. HTTP/2 requires both endpoints to send a connection preface as a final confirmation of the protocol in use and to establish the initial settings of the HTTP/2 connection. The client and the server each send a different preface.

The Magic frame in the figure above is the first half of the client's preface. Its content is PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n, which confirms that HTTP/2 is being used.

SETTINGS

image

image

The main function of the SETTINGS frame is to set the parameters of the connection. Its scope is the entire connection, not a single stream.

The SETTINGS frames in the figures above are all empty SETTINGS frames. Figure 1 is the client's part of the connection preface (the Magic frame and the SETTINGS frame together form the preface), and figure 2 is the server's. You may also notice multiple SETTINGS frames in the capture. Why? Because after sending the connection preface, the client and server must acknowledge each other's settings; the acknowledgment is a SETTINGS frame with the ACK flag set.

HEADERS

image

HEADERS frames are mainly used to store and propagate HTTP header information. We can spot some familiar fields in the HEADERS frame, as follows:

  • :method: POST
  • :scheme: http
  • :path: /proto.SearchService/Search
  • :authority: :10001
  • content-type: application/grpc
  • user-agent: grpc-go/1.20.0-dev

These fields should look familiar; they are the basic attributes of a gRPC call. In fact there are more than the ones shown: only the attributes that were actually set get displayed. For example, the common grpc-timeout and grpc-encoding fields are also carried here when set.
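As a sketch of where these header fields come from on the client side: a context deadline becomes the grpc-timeout header, and outgoing metadata becomes extra HTTP/2 header fields. The helper name withCallHeaders and the x-request-id key/value below are illustrative, not from the original:

import (
    "context"
    "time"

    "google.golang.org/grpc/metadata"
)

// withCallHeaders sketches where header-borne gRPC attributes originate:
// the context deadline is translated by grpc-go into the grpc-timeout
// header, and outgoing metadata rides along as extra header fields.
func withCallHeaders(ctx context.Context) (context.Context, context.CancelFunc) {
    ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
    ctx = metadata.AppendToOutgoingContext(ctx, "x-request-id", "demo-123")
    return ctx, cancel
}

You would call it as ctx, cancel := withCallHeaders(context.Background()), defer cancel(), and then pass ctx to client.Search.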

DATA

image

The main function of the DATA frame is to carry the message body; it is the data frame. In the figure above you can clearly see our request parameter “gRPC” stored inside it. That is all you need to know here.

HEADERS, DATA, HEADERS

image

In the figure above, the HEADERS frame is relatively simple: it tells us the HTTP response status and the content type of the response.

image

The DATA frame in the figure above mainly carries the response payload; the “gRPC Server” string in the figure is the response of our RPC method.

image

The HEADERS frame in the figure above mainly carries the gRPC status and status message: the grpc-status and grpc-message fields hold the final status of our gRPC call.

Other steps

WINDOW_UPDATE

Its main function is to manage flow-control windows. Normally, when a connection is opened, the server and client immediately exchange SETTINGS frames to determine the size of the flow-control window. The default is about 65 KB, but either side can adjust the flow control by issuing WINDOW_UPDATE frames.

image
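If you do want different window sizes, grpc-go exposes dial options for it. A minimal sketch, with purely illustrative 1 MiB sizes; larger windows mean fewer WINDOW_UPDATE round trips on large payloads:

import "google.golang.org/grpc"

// dialWithWindows sketches overriding the default flow-control windows
// at dial time.
func dialWithWindows(target string) (*grpc.ClientConn, error) {
    return grpc.Dial(
        target,
        grpc.WithInsecure(),
        grpc.WithInitialWindowSize(1<<20),     // per-stream flow-control window
        grpc.WithInitialConnWindowSize(1<<20), // per-connection flow-control window
    )
}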

PING/PONG

Its main function is to check whether the current connection is still alive, and it is also commonly used to measure round-trip time. It is the familiar PING/PONG mechanism, which everyone should know.
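The client side of this PING behavior can be tuned via keepalive.ClientParameters; a sketch with illustrative durations, not recommendations:

import (
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/keepalive"
)

// dialWithKeepalive sketches tuning the PING-based keepalive from the client.
func dialWithKeepalive(target string) (*grpc.ClientConn, error) {
    return grpc.Dial(target, grpc.WithInsecure(),
        grpc.WithKeepaliveParams(keepalive.ClientParameters{
            Time:                10 * time.Second, // send a PING after 10s of inactivity
            Timeout:             time.Second,      // close if no ack arrives within 1s
            PermitWithoutStream: true,             // PING even with no active RPCs
        }))
}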

Summary

image

  • Before data is exchanged, the client and server each send a connection preface (Magic + SETTINGS) to establish the protocol and configuration.
  • When transmitting data, flow-control frames such as WINDOW_UPDATE are involved.
  • gRPC's additional attributes are transmitted and set via HEADERS frames, while the actual request/response payload travels in DATA frames.
  • The request/response outcome is split into an HTTP status and a gRPC status.
  • When the client sends a PING, the server responds with a PONG, and vice versa.

For the basics of using gRPC, you can take a look at my other gRPC primer series; I believe it will be helpful to you.

Understanding the internals

Server side

image

Why can a gRPC Server be created with four lines of code, and what logic runs internally? Have you ever wondered? Next, we will analyze it step by step and see what is inside.

I. Initialization

// grpc.NewServer()
func NewServer(opt ...ServerOption) *Server {
    opts := defaultServerOptions
    for _, o := range opt {
        o(&opts)
    }
    s := &Server{
        lis:    make(map[net.Listener]bool),
        opts:   opts,
        conns:  make(map[io.Closer]bool),
        m:      make(map[string]*service),
        quit:   make(chan struct{}),
        done:   make(chan struct{}),
        czData: new(channelzData),
    }
    s.cv = sync.NewCond(&s.mu)
    ...

    return s
}

This part is fairly simple: it instantiates grpc.Server and initializes its fields, which involve the following:

  • lis: the set of listeners (listening addresses).
  • opts: the server options, including Credentials, Interceptor, and other basic configuration (a sketch of passing options follows this list).
  • conns: the set of client connections.
  • m: the service information map.
  • quit: the quit signal channel.
  • done: the completion signal channel.
  • czData: stores channelz-related data for the ClientConn, addrConn, and Server.
  • cv: a condition variable used during graceful shutdown; it is waited on until all RPC requests have finished and all connections are closed.
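As a sketch of how opts gets populated: every ServerOption passed to NewServer mutates the defaultServerOptions copy before the Server struct above is built. The loggingInterceptor below is a hypothetical example, and the 4 MiB cap is illustrative:

import (
    "context"
    "log"

    "google.golang.org/grpc"
)

// loggingInterceptor is a hypothetical unary server interceptor; it ends up
// in opts via the grpc.UnaryInterceptor ServerOption.
func loggingInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    log.Printf("unary call: %s", info.FullMethod)
    return handler(ctx, req)
}

func newServer() *grpc.Server {
    return grpc.NewServer(
        grpc.UnaryInterceptor(loggingInterceptor),
        grpc.MaxRecvMsgSize(4<<20), // illustrative: cap inbound messages at 4 MiB
    )
}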

II. Registration

pb.RegisterSearchServiceServer(server, &SearchService{})

Step 1: Service interface

// search.pb.go
type SearchServiceServer interface {
    Search(context.Context, *SearchRequest) (*SearchResponse, error)
}

func RegisterSearchServiceServer(s *grpc.Server, srv SearchServiceServer) {
    s.RegisterService(&_SearchService_serviceDesc, srv)
}

Remember the Protobuf definitions we write? The generated .pb.go file defines an interface that constrains the Service API implementation. When we register with gRPC Server, we pass in our application's implementation of that interface, and the generated RegisterSearchServiceServer method guarantees the two stay consistent.

Step 2: Service IDL description

Can you pass in just anything? No: you must implement interface methods consistent with the Protobuf definition. But what is &_SearchService_serviceDesc for? The code is as follows:

// search.pb.go
var _SearchService_serviceDesc = grpc.ServiceDesc{
    ServiceName: "proto.SearchService",
    HandlerType: (*SearchServiceServer)(nil),
    Methods: []grpc.MethodDesc{
        {
            MethodName: "Search",
            Handler:    _SearchService_Search_Handler,
        },
    },
    Streams:  []grpc.StreamDesc{},
    Metadata: "search.proto",
}

This is the service's description, used internally to express what the service provides. It involves the following:

  • ServiceName: the service name.
  • HandlerType: the service interface, used to check whether the implementation provided by the user satisfies the interface.
  • Methods: the set of unary methods. Note the Handler field, which points to the final RPC handler and is used during the RPC method execution phase.
  • Streams: the set of streaming methods.
  • Metadata: a description of the data's attributes; here it mainly describes the SearchServiceServer service.

Step 3: Register Service

func (s *Server) register(sd *ServiceDesc, ss interface{}) {
    ...
    srv := &service{
        server: ss,
        md:     make(map[string]*MethodDesc),
        sd:     make(map[string]*StreamDesc),
        mdata:  sd.Metadata,
    }
    for i := range sd.Methods {
        d := &sd.Methods[i]
        srv.md[d.MethodName] = d
    }
    for i := range sd.Streams {
        ...
    }
    s.m[sd.ServiceName] = srv
}

In the final step, the service interface information and service description from the previous steps are registered into an internal service struct, ready for use by subsequent actual calls. It involves the following:

  • server: the service's interface implementation.
  • md: the set of unary RPC methods.
  • sd: the set of streaming RPC methods.
  • mdata: sd.Metadata, the metadata.

Summary

This chapter mainly covered what gRPC Server sorts out and registers before starting. It looks very simple, but in fact everything is preparation for the actual calls that follow. Let's organize our thinking and look at it end to end, as follows:

image

III. Listening

Next comes the most important and most interesting phase of the whole process: listening and handling, as follows:

func (s *Server) Serve(lis net.Listener) error {
    ...
    var tempDelay time.Duration 
    for {
        rawConn, err := lis.Accept()
        if err != nil {
            if ne, ok := err.(interface {
                Temporary() bool
            }); ok && ne.Temporary() {
                if tempDelay == 0 {
                    tempDelay = 5 * time.Millisecond
                } else {
                    tempDelay *= 2
                }
                if max := 1 * time.Second; tempDelay > max {
                    tempDelay = max
                }
                ...
                timer := time.NewTimer(tempDelay)
                select {
                case <-timer.C:
                case <-s.quit:
                    timer.Stop()
                    return nil
                }
                continue
            }
            ...
            return err
        }
        tempDelay = 0

        s.serveWG.Add(1)
        go func() {
            s.handleRawConn(rawConn)
            s.serveWG.Done()
        }()
    }
}

Serve behaves according to whatever Listener is passed in from outside; this is the charm of net.Listener, which keeps flexibility and extensibility high. The most common case for a gRPC Server is a TCPConn-based TCP Listener. Next, let's look at the specific processing logic, as follows:

image

  • Loop calling lis.Accept to take connections off the queue; if there are no pending connections, it blocks and waits.
  • If lis.Accept fails, a sleep mechanism kicks in: the first failure sleeps 5ms, and each subsequent failure doubles the delay up to a cap of 1s; after sleeping, it tries to accept the next connection.
  • If lis.Accept succeeds, the sleep duration is reset and a new goroutine is started to call the handleRawConn method to handle the new request. This is the origin of the saying “each request is handled by a separate goroutine.”
  • The loop also covers “quitting” the service, mainly in two cases: a hard shutdown and a graceful shutdown (see the sketch after this list).
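Here is a sketch of wiring up the graceful path; the signal-handling boilerplate is illustrative, while GracefulStop is the real API that waits on the s.cv condition variable described earlier until in-flight RPCs finish:

import (
    "net"
    "os"
    "os/signal"
    "syscall"

    "google.golang.org/grpc"
)

// serveWithGracefulStop runs Serve and, on SIGINT/SIGTERM, triggers
// GracefulStop: Accept stops, in-flight RPCs drain, then Serve returns.
func serveWithGracefulStop(server *grpc.Server, lis net.Listener) error {
    go func() {
        ch := make(chan os.Signal, 1)
        signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)
        <-ch // block until a shutdown signal arrives
        server.GracefulStop()
    }()
    return server.Serve(lis)
}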

Client

image

I. Creating the dial connection

// grpc.Dial(":"+PORT, grpc.WithInsecure())
func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) {
    cc := &ClientConn{
        target:            target,
        csMgr:             &connectivityStateManager{},
        conns:             make(map[*addrConn]struct{}),
        dopts:             defaultDialOptions(),
        blockingpicker:    newPickerWrapper(),
        czData:            new(channelzData),
        firstResolveEvent: grpcsync.NewEvent(),
    }
    ...
    chainUnaryClientInterceptors(cc)
    chainStreamClientInterceptors(cc)

    ...
}

The grpc.Dial method is in fact a wrapper around grpc.DialContext; the difference is that Dial passes context.Background as the ctx. Its main job is to create a client connection to the given target, with the following responsibilities:

  • Initialize the ClientConn
  • Initialize the load balancing configuration (process-level LB)
  • Initialize channelz
  • Initialize retry rules and the client's unary/streaming interceptors
  • Initialize the basics of the protocol stack
  • Set up timeout control for the relevant context
  • Initialize and parse the address information
  • Create the connection with the server

Connected or not?

I have heard people say that after calling grpc.Dial, the client has already established a connection with the server. Is that right? Let's take a bird's-eye view of the whole picture and look at the running goroutines, as follows:

image

We can see several core methods that wait for and process signals the whole time; analyzing the underlying source code shows they involve the following:

func (ac *addrConn) connect()
func (ac *addrConn) resetTransport()
func (ac *addrConn) createTransport(addr resolver.Address, copts transport.ConnectOptions, connectDeadline time.Time)
func (ac *addrConn) getReadyTransport()

Here we mainly analyze the resetTransport method shown in the goroutine trace and see what it does. The core code is as follows:

func (ac *addrConn) resetTransport() {
    for i := 0; ; i++ {
        if ac.state == connectivity.Shutdown {
            return
        }
        ...
        connectDeadline := time.Now().Add(dialDuration)
        ac.updateConnectivityState(connectivity.Connecting)
        newTr, addr, reconnect, err := ac.tryAllAddrs(addrs, connectDeadline)
        if err != nil {
            if ac.state == connectivity.Shutdown {
                return
            }
            ac.updateConnectivityState(connectivity.TransientFailure)
            timer := time.NewTimer(backoffFor)
            select {
            case <-timer.C:
                ...
            }
            continue
        }

        if ac.state == connectivity.Shutdown {
            newTr.Close()
            return
        }
        ...
        if !healthcheckManagingState {
            ac.updateConnectivityState(connectivity.Ready)
        }
        ...

        if ac.state == connectivity.Shutdown {
            return
        }
        ac.updateConnectivityState(connectivity.TransientFailure)
    }
}

This method keeps trying to create a connection, ending once it succeeds; otherwise it keeps retrying according to the Backoff algorithm until it succeeds. The conclusion: simply calling DialContext establishes the connection asynchronously. It does not take effect immediately; the connection sits in the Connecting state and only becomes usable once it reaches the Ready state.

Is it really connected?

image

The packet capture tool shows not a single packet, so is it really connected? I think this is a matter of wording; we should be as precise as possible. If you truly want DialContext to complete the connection to the server, you need to pass the WithBlock option. Although it blocks and waits, the connection it returns has reached the Ready state (handshake succeeded). See the following figure:

image
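A sketch of the blocking variant; the 5s deadline is illustrative, and it only bounds the dial here because WithBlock is set (see the Q&A below):

import (
    "context"
    "time"

    "google.golang.org/grpc"
)

// dialBlocking sketches the synchronous handshake: with WithBlock,
// DialContext does not return until the connection reaches Ready
// (or ctx expires).
func dialBlocking(target string) (*grpc.ClientConn, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    return grpc.DialContext(ctx, target, grpc.WithInsecure(), grpc.WithBlock())
}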

II. Instantiating the Service API

type SearchServiceClient interface {
    Search(ctx context.Context, in *SearchRequest, opts ...grpc.CallOption) (*SearchResponse, error)
}

type searchServiceClient struct {
    cc *grpc.ClientConn
}

func NewSearchServiceClient(cc *grpc.ClientConn) SearchServiceClient {
    return &searchServiceClient{cc}
}

This instantiates the service interface; it is relatively simple.

III. Invocation

// search.pb.go
func (c *searchServiceClient) Search(ctx context.Context, in *SearchRequest, opts ...grpc.CallOption) (*SearchResponse, error) {
    out := new(SearchResponse)
    err := c.cc.Invoke(ctx, "/proto.SearchService/Search", in, out, opts...)
    if err != nil {
        return nil, err
    }
    return out, nil
}

The RPC method generated from the proto is more like a packing box that wraps up what is needed; what actually gets called is the grpc.invoke method, as follows:

func invoke(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error {
    cs, err := newClientStream(ctx, unaryStreamDesc, cc, method, opts...)
    if err != nil {
        return err
    }
    if err := cs.SendMsg(req); err != nil {
        return err
    }
    return cs.RecvMsg(reply)
}

At a glance, three calls deserve attention, as follows:

  • newClientStream: obtains the transport-layer Transport and wraps it into a ClientStream to return. This involves load balancing, timeout control, Encoding, and Stream setup, essentially mirroring the server side.
  • cs.SendMsg: sends the RPC request out, but it is not responsible for waiting for the response.
  • cs.RecvMsg: blocks waiting to receive the RPC method's response.

Connection

// clientconn.go
func (cc *ClientConn) getTransport(ctx context.Context, failfast bool, method string) (transport.ClientTransport, func(balancer.DoneInfo), error) {
    t, done, err := cc.blockingpicker.pick(ctx, failfast, balancer.PickOptions{
        FullMethodName: method,
    })
    if err != nil {
        return nil, nil, toRPCErr(err)
    }
    return t, done, nil
}

Inside newClientStream, the getTransport method obtains the ClientTransport and ServerTransport abstractions of the transport layer; in effect, this picks a connection over which the subsequent RPC call will be transmitted.

IV. Closing the connection

// conn.Close()
func (cc *ClientConn) Close() error {
    defer cc.cancel()
    ...
    cc.csMgr.updateState(connectivity.Shutdown)
    ...
    cc.blockingpicker.close()
    if rWrapper != nil {
        rWrapper.close()
    }
    if bWrapper != nil {
        bWrapper.close()
    }

    for ac := range conns {
        ac.tearDown(ErrClientConnClosing)
    }
    if channelz.IsOn() {
        ...
        channelz.AddTraceEvent(cc.channelzID, ted)
        channelz.RemoveEntry(cc.channelzID)
    }
    return nil
}

This method cancels the ClientConn context and closes all underlying transports. It involves the following:

  • Cancel the context
  • Clear and close the client connections
  • Clear and close the resolver connection
  • Clear and close the load balancer connection
  • Add a channelz trace event
  • Remove the current channel's channelz entry

Q&A

1. What is gRPC metadata transmitted through?

image

2. Will calling grpc.Dial really connect to the server?

Yes, but it connects asynchronously, and the connection starts in the Connecting state. If you set the grpc.WithBlock option, it will block and wait (until the handshake succeeds). Also note that without grpc.WithBlock, ctx timeout control has no effect on connection establishment.

3. Will failing to Close a ClientConn cause a leak?

Yes. If your client is not a long-running process, resources are passively reclaimed when the application exits. But if it is a long-running process and you really do forget to call Close, it will leak, as the following figures show:

3.1. Client

image

3.2. Server

image

3.3. TCP

image

4. What happens if call timeouts are not controlled?

Nothing goes wrong in the short term, but the leaks keep accumulating, and eventually, of course, the service can no longer respond. See the following figure:

image
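A sketch of the missing control: give every RPC a deadline so a stuck server cannot pin the calling goroutine (and its memory) forever. The 3s value is illustrative, and the client is the SearchServiceClient from earlier:

import (
    "context"
    "time"
)

// searchWithTimeout sketches per-call timeout control: the deadline becomes
// the grpc-timeout header and bounds how long the client blocks.
func searchWithTimeout(client pb.SearchServiceClient) (*pb.SearchResponse, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()
    return client.Search(ctx, &pb.SearchRequest{Request: "gRPC"})
}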

5. Why can't more than one default interceptor be set?

func chainUnaryClientInterceptors(cc *ClientConn) {
    interceptors := cc.dopts.chainUnaryInts
    if cc.dopts.unaryInt != nil {
        interceptors = append([]UnaryClientInterceptor{cc.dopts.unaryInt}, interceptors...)
    }
    var chainedInt UnaryClientInterceptor
    if len(interceptors) == 0 {
        chainedInt = nil
    } else if len(interceptors) == 1 {
        chainedInt = interceptors[0]
    } else {
        chainedInt = func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error {
            return interceptors[0](ctx, method, req, reply, cc, getChainUnaryInvoker(interceptors, 0, invoker), opts...)
        }
    }
    cc.dopts.unaryInt = chainedInt
}

When multiple interceptors are passed, only the first one takes effect. So the conclusion: passing more than one is allowed, but useless.

6. What if multiple interceptors are really needed?

You can use the chaining methods provided by go-grpc-middleware for grpc.UnaryInterceptor and grpc.StreamInterceptor; they are convenient, quick, and worry-free. A usage sketch follows.
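Here, as a sketch, the caller supplies hypothetical interceptor values; ChainUnaryClient folds them all into the single unaryInt slot that chainUnaryClientInterceptors (above) would otherwise let only one interceptor occupy:

import (
    grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
    "google.golang.org/grpc"
)

// dialChained sketches chaining multiple client interceptors with
// go-grpc-middleware; ints are hypothetical interceptors from the caller.
func dialChained(target string, ints ...grpc.UnaryClientInterceptor) (*grpc.ClientConn, error) {
    return grpc.Dial(
        target,
        grpc.WithInsecure(),
        grpc.WithUnaryInterceptor(grpc_middleware.ChainUnaryClient(ints...)),
    )
}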

Just using it is not enough; let's dig deeper and see how it is implemented. The core code is as follows:

func ChainUnaryClient(interceptors ...grpc.UnaryClientInterceptor) grpc.UnaryClientInterceptor {
    n := len(interceptors)
    if n > 1 {
        lastI := n - 1
        return func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
            var (
                chainHandler grpc.UnaryInvoker
                curI         int
            )

            chainHandler = func(currentCtx context.Context, currentMethod string, currentReq, currentRepl interface{}, currentConn *grpc.ClientConn, currentOpts ...grpc.CallOption) error {
                if curI == lastI {
                    return invoker(currentCtx, currentMethod, currentReq, currentRepl, currentConn, currentOpts...)
                }
                curI++
                err := interceptors[curI](currentCtx, currentMethod, currentReq, currentRepl, currentConn, chainHandler, currentOpts...)
                curI--
                return err
            }

            return interceptors[0](ctx, method, req, reply, cc, chainHandler, opts...)
        }
    }
    ...
}

When there is more than one interceptor, the chain recurses starting from interceptors[1]; each recursive step executes interceptor interceptors[i], until finally the real invoker is executed. People often ask in what order interceptors execute. Can you work out the answer from this code?

7. What is the problem with creating ClientConn frequently?

We can verify this in reverse: assume there is no shared ClientConn and see what happens, as follows:

func BenchmarkSearch(b *testing.B) {
    for i := 0; i < b.N; i++ {
        conn, err := GetClientConn()
        if err != nil {
            b.Errorf("GetClientConn err: %v", err)
        }
        _, err = Search(context.Background(), conn)
        if err != nil {
            b.Errorf("Search err: %v", err)
        }
    }
}

Output results:

    ... connection error: desc = "transport: Error while dialing dial tcp :10001: socket: too many open files"
    ... connection error: desc = "transport: Error while dialing dial tcp :10001: socket: too many open files"
    ... connection error: desc = "transport: Error while dialing dial tcp :10001: socket: too many open files"
    ... connection error: desc = "transport: Error while dialing dial tcp :10001: socket: too many open files"
FAIL
exit status 1

If your application creates/calls ClientConns concurrently at high frequency, the system may end up holding too many file handles. In that case, change how the application creates and uses ClientConns (see the sketch below), or pool them; for pooling you can refer to the grpc-go-pool project.
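A sketch of the shared-connection approach; GetSharedConn is a hypothetical helper reusing the PORT constant from earlier. A ClientConn is safe for concurrent use and multiplexes RPCs over one HTTP/2 connection, so per-call dialing is unnecessary:

import (
    "sync"

    "google.golang.org/grpc"
)

var (
    connOnce sync.Once
    sharedCC *grpc.ClientConn
    dialErr  error
)

// GetSharedConn dials once and hands the same ClientConn to every caller.
func GetSharedConn() (*grpc.ClientConn, error) {
    connOnce.Do(func() {
        sharedCC, dialErr = grpc.Dial(":"+PORT, grpc.WithInsecure())
    })
    return sharedCC, dialErr
}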

8. Will the client retry by default after the request fails?

It will keep retrying until the context is cancelled. For retry timing, the backoff algorithm is used as the reconnection mechanism, and the default maximum retry interval is 120s.
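If the default 120s ceiling does not suit you, it can be lowered at dial time; a sketch with a purely illustrative 30s cap:

import (
    "time"

    "google.golang.org/grpc"
)

// dialWithBackoff sketches adjusting the reconnect backoff ceiling.
func dialWithBackoff(target string) (*grpc.ClientConn, error) {
    return grpc.Dial(target, grpc.WithInsecure(), grpc.WithBackoffMaxDelay(30*time.Second))
}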

9. Why use HTTP/2 as the transport protocol?

Many clients access the network through HTTP proxies. Since gRPC is implemented entirely on HTTP/2, once a proxy supports HTTP/2 it can transparently forward gRPC data. Beyond that, reverse proxies responsible for load balancing, access control, and so on can be seamlessly compatible with gRPC, which is far more sensible than designing a bespoke wire protocol the way Thrift does. @cteller @teng yifei

10. Is there a problem with gRPC load balancing in Kubernetes?

gRPC's RPC protocol is implemented on the HTTP/2 standard, and one of HTTP/2's major features is that, unlike HTTP/1.1, it does not establish a new connection for every request; it reuses the existing connection.

As a result, kube-proxy only load-balances at connection establishment time, and every RPC request after that reuses the original connection, so in practice every subsequent RPC request goes to the same place.

Note: this is the case when a Kubernetes Service is used for load balancing.

Summary

  • gRPC is based on HTTP/2 + Protobuf.
  • gRPC has four call modes: unary, server streaming, client streaming, and bidirectional streaming.
  • gRPC's additional attributes travel in HEADERS frames, while the payload travels in DATA frames.
  • A connection created with grpc.Dial is established asynchronously by default, starting in the Connecting state.
  • If the client needs a synchronous connection, call WithBlock(); when it completes, the state is Ready.
  • The server listens by looping over Accept; with no pending connection it sleeps, up to a maximum of 1s; each new request is handled by a newly started goroutine.
  • A grpc.ClientConn that is never closed leaks goroutines, memory, and more.
  • Any internal/external call without timeout control will leak, and the client will retry again and again.
  • In certain scenarios, unregulated use of grpc.ClientConn will affect calls.
  • Interceptors not chained with something like go-grpc-middleware overwrite each other; only one takes effect.
  • Caution is required when selecting gRPC's load balancing mode.

References