Meta-Driven - Part 1

Published 05-19-2019 15:48:22

Sometimes we just need “factory” services. I'm not talking about the typical Java-style factories you're used to. I'm talking about simple CRUD services for fairly simple data models that feed hundreds, if not thousands of “downstream” processes. Using JSON, one example of an overly simplified data model could look like:


{
    "source": {
        "vendor": "Some Institution",
        "filename": "file123.xlsx",
        "correlation_id": "b2eefb6e-779a-4c08-b0a5-006c401ea2fa"
    },
    "some_record_name": "Abc 123",
    "amount": 1022.41,
    "different_amount": 99.2,
    "applicable_date": 1557965341334661013,
    "created": 1557965202009530561,
    "created_by": "some_user",
    "updated": 1557965202009530561,
    "updated_by": "some_user"
}

A lot of different teams at a company probably need this data, but not all of them have shared services, nor are they even using services half the time. At this mythical company, there are likely a lot of proprietary data formats from a variety of vendors/partners that exist in the form of Excel spreadsheets, text documents, CSVs, etc. while being shipped around with mechanisms ranging from email, to SFTP, to plain ol’ copy & paste into a shared storage service. This leads to a lot of “duplication of effort”, “you can get data X in a spreadsheet from Brina and I think Alex has data Y”, or just “I don't know where we get it from”. Happens all the time.

So this post is not about the transformation process of getting all your vendor data and their 15 different formats consolidated. That's a subject all on its own that we'll go over another time. I've got some thoughts on that. For now, we'll just assume that piece isn't an issue. We're on to the after party where everyone is in the same head space (i.e. normalized data). You just want your data. You want others to have your data. You know the format. You want to create, retrieve, update, and delete.

See the title

I want a web interface that lets me generate a model. You give it a name, add some fields with their respective data types, click some saves… BOOM! You've got a service.

Behind the magic curtains, JSON is sent back to a Go server (because we all know I love me some Go). The server generates a few protobuf files and some project assets. make runs protoc (the protobuf compiler), which produces additional Go code. git commit, some CICD hand-wavey magic, and we have a new running CRUD service.
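
To paint a slightly clearer picture, the JSON the UI ships back could be as simple as something like this (a purely hypothetical shape, just to illustrate the idea):

{
    "name": "dope",
    "package": "super",
    "fields": [
        { "name": "dope_1", "type": "string" },
        { "name": "dope_2", "type": "int32" },
        { "name": "dope_3", "type": "string", "repeated": true },
        { "name": "dope_4", "type": "bool" }
    ]
}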

Now, I did a lot of “it's just this easy” on purpose. This is how I saw the magic in my head, along with some “meh, I'm pretty sure I can do that” confidence. So, I chatted about it non-stop for weeks (if not months) with several friends, to whom I apologize. I can get annoying when an idea takes hold. Anyhow… one particular individual introduced me to the idea of meta-data driven architecture. Not a new concept to the world, but new to me. So, I started digging a little and it fit my mental model. Now I had a name I could put on the idea, one that people might be able to understand and get behind.

Where to start

Recently, I took an opportunity to actually get my hands dirty on the meta-driven concept. I was tired of talking and ready to actually do. I pulled out my 2 favorite prototyping tools, Go and MongoDB. I also took the opportunity to do something substantial with protobufs.

A little tangent on protobufs. Like most devs, I like new and shiny things. I like reasons to toy around with new or different concepts, ideas, languages, etc. This isn't one of those times. In this particular instance, I wanted to use protobufs for a VERY specific reason and not as a “new and shiny”. The protoc compiler actually has plugins for multiple languages. In a lot of enterprise situations I've been a part of, Java and Spring Boot seem to be the tools of choice. NodeJS, .NET, and a healthy amount of Python are definitely in the stack, with Go gradually becoming part of the mix. All this to say, we as a community have a colourful landscape when it comes to languages. Being able to generate code from a single protobuf file for a good portion of that stack is a compelling point. If we have our dope auto-generated service spun up in Go and we want a dev in another part of the world to be able to quickly consume the content without spending a day or 2 writing and testing a hand-coded class file… well, there ya go. As part of our auto-magic, we generate a library artifact and publish it to your favorite repository. Our dev friend can now add the dependency to their Gradle file and they're off to the races.
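
As a rough illustration of that point, the very same .proto files we'll write below can be fed to protoc's stock Java generator (directory names here are just for the example):

$ mkdir -p gen/java
$ protoc -Iproto --java_out=gen/java proto/record.proto proto/dope.proto

Wrap the output in a jar, publish it, and the Gradle/Maven folks never have to hand-roll the model classes.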

Ok, so… how do we do that?

Let's make a dope app

We'll want to start off by creating a dummy app. Get it up and running first, then go back and strip it for parts. Let's assume we have a directory called super/. Create a subdirectory called proto/ and add our .proto files.

record.proto

This should be on all your models. This is the core info that describes the record itself and who's done anything to it. This does not necessarily reflect the data contained within the record, just the record itself.

syntax = "proto3";
package super;

import "google/protobuf/timestamp.proto";

option go_package = "github.com/elliottpok/super";
option java_multiple_files = true;
option java_outer_classname = "RecordProto";
option java_package = "com.elliottpolk.super";

message RecordInfo {
    google.protobuf.Timestamp created = 1;
    string created_by = 2;

    google.protobuf.Timestamp updated = 3;
    string updated_by = 4;

    enum Status {
        draft = 0;
        active = 1;
        archived = 2;
        invalid = 3;
    }

    Status status = 5;
}

dope.proto

This is our actual model. This is what we'll be serving up in our API. It should contain the record info we just created above as well as a unique ID. We've also added a few arbitrary fields for the purposes of this example, since it wouldn't make much sense not to include any data fields.

syntax = "proto3";
package super;

import "record.proto";

option go_package = "github.com/elliottpolk/super";
option java_multiple_files = true;
option java_outer_classname = "DopeProto";
option java_package = "com.elliottpolk.super";

message Dope {
    // standard record values
    super.RecordInfo record_info = 1;

    // unique identifier
    string id = 2;

    // super dope field 1
    string dope_1 = 3;

    // super dope field 2
    int32 dope_2 = 4;

    // super dope field 3
    repeated string dope_3 = 5;

    // super dope field 4
    bool dope_4 = 6;
}

dopeservice.proto

This one is for the actual CRUD API. There's a bit extra in the imports that allows us to generate the endpoint code automatically.

syntax = "proto3";
package super;

import "dope.proto";
import "google/api/annotations.proto";

option go_package = "github.com/elliottpolk/super";
option java_multiple_files = true;
option java_outer_classname = "DopeServiceProto";
option java_package = "com.elliottpolk.super";

message Empty {
    // unique identifier of the original incoming request to help troubleshoot
    string request_id = 1;
}

message DopeRequest {
    // unique identifier to help troubleshoot each request
    string request_id = 1;

    // username of the one making the request
    string username = 2;

    // unique identifier of the super.Dope
    string id = 3;

    // dataset to process
    repeated super.Dope payload = 4;
}

message DopeResponse {
    // unique identifier of the original incoming request to help troubleshoot
    string request_id = 1;

    repeated super.Dope payload = 2;
}

service DopeService {
    // create new Dope item(s)
    rpc Create(DopeRequest) returns (Empty) {
        option (google.api.http) = {
            post: "/api/v1/dopes"
            body: "*"
        };
    }

    // retrieve a list of Dope items
    rpc Retrieve(DopeRequest) returns (DopeResponse) {
        option (google.api.http) = {
            get: "/api/v1/dopes"

            additional_bindings {
                get: "/api/v1/dopes/{id}"
            }
        };
    }

    // update Dope item(s)
    rpc Update(DopeRequest) returns (DopeResponse) {
        option (google.api.http) = {
            put: "/api/v1/dopes/{id}"
            body: "*"
        };
    }

    // delete Dope item(s)
    rpc Delete(DopeRequest) returns (Empty) {
        option (google.api.http) = {
            delete: "/api/v1/dopes"
            body: "*"
        };
    }
}

Generation time

We now need to run protoc on each file. Along with the before and after of the project directory, below is a little bash loop doing just that.

# before
$ tree
.
└── proto
    ├── dope.proto
    ├── dopeservice.proto
    └── record.proto

1 directory, 3 files

# for loop to run the protoc command
$ for i in `ls proto`; \
do \
    protoc \
      -Iproto \
      -I${GOPATH}/src \
      -I${PWD}/proto \
      -I${GOPATH}/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \
      --go_out=plugins=grpc,paths=source_relative:. \
      --grpc-gateway_out=logtostderr=true,paths=source_relative,allow_delete_body=true:. \
      "proto/${i}"; \
done

# generates ...
$ tree
.
├── dope.pb.go
├── dopeservice.pb.go
├── dopeservice.pb.gw.go
├── proto
│   ├── dope.proto
│   ├── dopeservice.proto
│   └── record.proto
└── record.pb.go

1 directory, 7 files
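
Since the longer-term plan is for make to drive protoc (as hand-waved earlier), that same loop can live behind a minimal Makefile target. Something like the following sketch (the target name is my own convention, and remember make recipes must be indented with tabs):

# Makefile
.PHONY: protos

# regenerate the Go and gateway code from every .proto under proto/
protos:
	@for f in proto/*.proto; do \
		protoc \
			-Iproto \
			-I$(GOPATH)/src \
			-I$(GOPATH)/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \
			--go_out=plugins=grpc,paths=source_relative:. \
			--grpc-gateway_out=logtostderr=true,paths=source_relative,allow_delete_body=true:. \
			"$$f"; \
	done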

This does me no good…

At the moment, this code doesn't really do anything. It gives you the model and the ability to serialize it into JSON or protobuf format. Other than that, no server, no read/write from the database. Useless. This is where we need to step in with a bit of our own code. I promise that later we won't need to do this. Remember, we're writing the app we want first and then reversing it.

Let's get the main part of the app out of the way. I prefer to use urfave/cli for all my CLI needs. In my cmd/ dir, we have our main.go file that looks like:

package main

import (
	"context"
	"os"

	"github.com/elliottpolk/super/config"
	"github.com/elliottpolk/super/grpc"
	"github.com/elliottpolk/super/rest"

	log "github.com/sirupsen/logrus"
	cli "gopkg.in/urfave/cli.v2"
	altsrc "gopkg.in/urfave/cli.v2/altsrc"
)

var (
	RpcPortFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "rpc-port",
		Value:   "7000",
		Usage:   "RPC port to listen on",
		EnvVars: []string{"DOPE_RPC_PORT"},
	})

	HttpPortFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "http-port",
		Value:   "8080",
		Usage:   "HTTP port to listen on",
		EnvVars: []string{"DOPE_HTTP_PORT"},
	})

	HttpsPortFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "tls-port",
		Value:   "8443",
		Usage:   "HTTPS port to listen on",
		EnvVars: []string{"DOPE_HTTPS_PORT"},
	})

	TlsCertFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "tls-cert",
		Usage:   "TLS certificate file for HTTPS",
		EnvVars: []string{"DOPE_TLS_CERT"},
	})

	TlsKeyFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "tls-key",
		Usage:   "TLS key file for HTTPS",
		EnvVars: []string{"DOPE_TLS_KEY"},
	})

	DatastoreAddrFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "datastore-addr",
		Aliases: []string{"ds-addr", "dsa"},
		Usage:   "Database address",
	})

	DatastorePortFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "datastore-port",
		Aliases: []string{"ds-port", "dsp"},
		Value:   "27017",
		Usage:   "Database port",
	})

	DatastoreNameFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "datastore-name",
		Aliases: []string{"ds-name", "dsn"},
		Value:   "super",
		Usage:   "Database name",
	})

	DatastoreUserFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "datastore-user",
		Aliases: []string{"ds-user", "dsu"},
		Usage:   "Database user",
	})

	DatastorePasswordFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "datastore-password",
		Aliases: []string{"ds-password", "dspwd"},
		Usage:   "Database password",
	})

	CfgFlag = altsrc.NewStringFlag(&cli.StringFlag{
		Name:    "config",
		Aliases: []string{"c", "cfg", "confg"},
		Usage:   "optional path to config file",
	})

	flags = []cli.Flag{
		CfgFlag,
		RpcPortFlag,
		HttpPortFlag,
		HttpsPortFlag,
		TlsCertFlag,
		TlsKeyFlag,
		DatastoreAddrFlag,
		DatastorePortFlag,
		DatastoreNameFlag,
		DatastoreUserFlag,
		DatastorePasswordFlag,
	}
)

func main() {
	app := cli.App{
		Name:  "dope",
		Flags: flags,
		Before: func(ctx *cli.Context) error {
			if len(ctx.String(CfgFlag.Name)) > 0 {
				return altsrc.InitInputSourceWithContext(flags, altsrc.NewYamlSourceFromFlagFunc("config"))(ctx)
			}
			return nil
		},
		Action: func(ctx *cli.Context) error {
			// read in the configuration
			comp := &config.Composition{
				Server: &config.ServerCfg{
					RpcPort:   ctx.String(RpcPortFlag.Name),
					HttpPort:  ctx.String(HttpPortFlag.Name),
					HttpsPort: ctx.String(HttpsPortFlag.Name),
					TlsCert:   ctx.String(TlsCertFlag.Name),
					TlsKey:    ctx.String(TlsKeyFlag.Name),
				},
				Db: &config.DbCfg{
					Addr:     ctx.String(DatastoreAddrFlag.Name),
					Port:     ctx.String(DatastorePortFlag.Name),
					DbName:   ctx.String(DatastoreNameFlag.Name),
					User:     ctx.String(DatastoreUserFlag.Name),
					Password: ctx.String(DatastorePasswordFlag.Name),
				},
			}

			// run the RESTful gateway in a separate goroutine since it blocks
			go func() {
				if err := rest.Serve(context.Background(), comp); err != nil {
					log.Fatal(err)
				}
			}()

			// use this one to block and prevent exiting
			if err := grpc.Serve(context.Background(), comp); err != nil {
				return cli.Exit(err, 1)
			}

			return nil
		},
	}

	app.Run(os.Args)
}

This references a bit of extra code that we'll write next, starting with the config package, then switching over to the rest and grpc packages. The flags may seem like a bit of overkill, but I follow a thought from @kelseyhightower: give my application every chance to configure and boot.
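
Since the Before hook wires the flags up to altsrc with a YAML source, a config file passed via --config just mirrors the flag names as top-level keys (at least, that's my reading of how altsrc resolves them). A hypothetical example:

rpc-port: "7000"
http-port: "8080"
tls-port: "8443"
datastore-addr: "localhost"
datastore-port: "27017"
datastore-name: "super"
datastore-user: "dope_user"
datastore-password: "not-a-real-password"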

Back at the root of our project, create the directory config/ and add the file config/composition.go. It'll have our datastore and HTTP server definitions along with a helper method to produce the MongoDB connection string as illustrated here:

package config

import "fmt"

type ServerCfg struct {
	RpcPort   string
	HttpPort  string
	HttpsPort string
	TlsCert   string
	TlsKey    string
}

type DbCfg struct {
	Addr     string
	Port     string
	DbName   string
	User     string
	Password string
}

type Composition struct {
	Server *ServerCfg
	Db     *DbCfg
}

func (db *DbCfg) ConnString() string {
	uri := "mongodb://"
	if len(db.User) > 0 && len(db.Password) > 0 {
		uri = fmt.Sprintf("%s%s:%s@", uri, db.User, db.Password)
	}

	if addr := db.Addr; len(addr) > 0 {
		uri = fmt.Sprintf("%s%s", uri, addr)
	}

	if port := db.Port; len(port) > 0 {
		uri = fmt.Sprintf("%s:%s", uri, port)
	}

	return uri
}
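
As a quick sanity check of that helper, a throwaway snippet like the following (hypothetical values, obviously) should print a familiar looking Mongo URI:

package main

import (
	"fmt"

	"github.com/elliottpolk/super/config"
)

func main() {
	db := &config.DbCfg{
		Addr:     "localhost",
		Port:     "27017",
		User:     "dope_user",
		Password: "s3cr3t",
	}

	// prints: mongodb://dope_user:s3cr3t@localhost:27017
	fmt.Println(db.ConnString())
}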

Next, we pull all of our errors into one place, error.go:

package super

import "errors"

var (
	ErrImmutableValue          = errors.New("attempting to edit immutable value")
	ErrInvalidRecordInfo       = errors.New("invalid record info")
	ErrInvalidCreatedBy        = errors.New("invalid created by value")
	ErrInvalidUpdatedBy        = errors.New("invalid updated by value")
	ErrDuplicateRecord         = errors.New("duplicate record for provided id")
	ErrNotFound                = errors.New("no valid record for provided id")
	ErrInvalidId               = errors.New("no valid id for provided record")
	ErrInvalidUsername         = errors.New("no valid username provided")
	ErrIncompleteAction        = errors.New("not all records were properly acted on")
	ErrMutlipleRecordsReturned = errors.New("multiple records returned")
	ErrNoMongoClient           = errors.New("no valid mongo client")
)

For the next 2 sets of code, one could argue they can be merged. I'm keeping them separate because I've found it's a bit easier to write tests. Yes… I have skipped over the tests in this particular instance, but I will come back in future posts on testing. I'm still working to improve my own testing skills.

dope.go contains the binding logic that writes to the datastore, while dopeservice.go wraps those functions and exposes them to the servers.

dope.go

package super

import (
	"context"
	"time"

	"github.com/golang/protobuf/ptypes/timestamp"
	"github.com/google/uuid"
	"github.com/pkg/errors"
	log "github.com/sirupsen/logrus"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

const repo string = "dope"

// Create will validate fields for the provided records and attempt to create
// new Dope records in the datastore
func Create(ctx context.Context, items []*Dope, db *mongo.Database) error {
	// swap to []interface{} because mongo needs it
	in := make([]interface{}, len(items))

	// validate / enrich required fields
	for i, item := range items {
		// the provider must specify at least the CreatedBy value
		if item.RecordInfo == nil {
			return ErrInvalidRecordInfo
		}

		// verify created_by is populated for an attempt at an audit
		if len(item.RecordInfo.CreatedBy) < 1 {
			return ErrInvalidCreatedBy
		}

		// ensure the Dope has a unique identifier
		if len(item.Id) < 1 {
			item.Id = uuid.New().String()
		}

		// ensure the created value is populated
		if item.RecordInfo.Created == nil || item.RecordInfo.Created.Seconds < 1 {
			item.RecordInfo.Created = &timestamp.Timestamp{Seconds: time.Now().Unix()}
		}

		in[i] = item
	}

	// write Dope to datastore
	if _, err := db.Collection(repo).InsertMany(ctx, in); err != nil {
		return err
	}

	// nothing else to return on success
	return nil
}

// RetrieveOne returns a single Dope record for the provided id
func RetrieveOne(ctx context.Context, id string, db *mongo.Database) (*Dope, error) {
	res, err := Retrieve(ctx, bson.D{{"_id", id}}, db)
	if err != nil {
		return nil, err
	}

	if len(res) < 1 {
		return nil, ErrNotFound
	}

	if len(res) > 1 {
		return nil, ErrMutlipleRecordsReturned
	}

	return res[0], nil
}

// Retrieve returns all Dope records matching the provided filter
func Retrieve(ctx context.Context, filter bson.D, db *mongo.Database) ([]*Dope, error) {
	iter, err := db.Collection(repo).Find(ctx, filter)
	if err != nil {
		return nil, err
	}
	defer iter.Close(ctx)

	items := make([]*Dope, 0)
	for iter.Next(ctx) {
		item := &Dope{}
		if err := iter.Decode(&item); err != nil {
			return nil, errors.Wrapf(err, "unable to decode record")
		}
		items = append(items, item)
	}

	return items, nil
}

// Update stamps an updated timestamp on each provided item and attempts to replace the matching record(s) in the datastore
func Update(ctx context.Context, user string, filter bson.D, items []*Dope, db *mongo.Database) error {
	// ensure the user provided a username in an attempt to audit
	if len(user) < 1 {
		return ErrInvalidUsername
	}

	log.WithFields(log.Fields{
		"user":        user,
		"action_type": "update",
	}).Infof("attempting to update %d records", len(items))

	count := int64(0)
	for _, item := range items {
		item.RecordInfo.Updated = &timestamp.Timestamp{Seconds: time.Now().Unix()}

		res, err := db.Collection(repo).ReplaceOne(ctx, filter, item)
		if res != nil {
			count += res.ModifiedCount
		}

		if err != nil {
			return errors.Wrapf(err, "update: expected %d - actually %d", len(items), count)
		}
	}

	if want, got := int64(len(items)), count; got < want {
		return errors.Wrapf(ErrIncompleteAction, "update: expected %d - actually %d", want, got)
	}

	log.WithFields(log.Fields{
		"user":        user,
		"action_type": "update",
	}).Infof("updated %d records", len(items))

	return nil
}

// Delete removes the provided records from the datastore based on their ids
func Delete(ctx context.Context, user string, items []*Dope, db *mongo.Database) error {
	// ensure the user provided a username in an attempt to audit
	if len(user) < 1 {
		return ErrInvalidUsername
	}

	ids := make([]string, len(items))
	for i, item := range items {
		if len(item.Id) < 1 {
			return ErrInvalidId
		}
		ids[i] = item.Id
	}

	log.WithFields(log.Fields{
		"user":        user,
		"action_type": "delete",
	}).Infof("attempting to delete %d records", len(items))

	res, err := db.Collection(repo).DeleteMany(ctx, bson.D{{"_id", bson.D{{"$in", ids}}}})
	if err != nil {
		// note: res may be nil when DeleteMany errors, so don't dereference it here
		return errors.Wrapf(err, "deletion: expected %d record(s) to be removed", len(ids))
	}

	if want, got := int64(len(ids)), res.DeletedCount; got < want {
		return errors.Wrapf(ErrIncompleteAction, "deletion: expected %d - actually %d", want, got)
	}

	log.WithFields(log.Fields{
		"user":        user,
		"action_type": "delete",
	}).Infof("deleted %d records", res.DeletedCount)

	return nil
}

dopeservice.go

package super

import (
	"context"

	"github.com/elliottpolk/super/config"

	"github.com/pkg/errors"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

type DopeServer struct {
	cmp    *config.Composition
	client *mongo.Client
}

func NewDopeServer(cmp *config.Composition, client *mongo.Client) DopeServiceServer {
	return &DopeServer{
		cmp:    cmp,
		client: client,
	}
}

func (s *DopeServer) Create(ctx context.Context, req *DopeRequest) (*Empty, error) {
	empty := &Empty{RequestId: req.RequestId}

	if s.client == nil {
		return empty, ErrNoMongoClient
	}

	client := s.client
	if err := client.UseSession(ctx, func(session mongo.SessionContext) error {
		defer session.EndSession(ctx)

		if err := Create(session, req.Payload, client.Database(repo)); err != nil {
			defer session.AbortTransaction(ctx)
			return err
		}

		return nil
	}); err != nil {
		return empty, err
	}
	return empty, nil
}

func (s *DopeServer) Retrieve(ctx context.Context, req *DopeRequest) (*DopeResponse, error) {
	if s.client == nil {
		return nil, ErrNoMongoClient
	}

	result := &DopeResponse{
		RequestId: req.RequestId,
	}

	client := s.client
	if err := client.UseSession(ctx, func(session mongo.SessionContext) error {
		defer session.EndSession(ctx)

		// retrieve 1 and return by ID if provided in request
		if id := req.Id; len(id) > 0 {
			item, err := RetrieveOne(ctx, id, client.Database(repo))
			if err != nil {
				return errors.Wrapf(err, "unable to retrieve record for id %s", id)
			}
			result.Payload = []*Dope{item}

			return nil
		}

		items, err := Retrieve(ctx, bson.D{}, client.Database(repo))
		if err != nil {
			return errors.Wrap(err, "unable to retrieve records")
		}
		result.Payload = items

		return nil
	}); err != nil {
		return nil, err
	}

	return result, nil
}

func (s *DopeServer) Update(ctx context.Context, req *DopeRequest) (*DopeResponse, error) {
	if s.client == nil {
		return nil, ErrNoMongoClient
	}

	result := &DopeResponse{
		RequestId: req.RequestId,
	}

	client := s.client
	if err := client.UseSession(ctx, func(session mongo.SessionContext) error {
		defer session.EndSession(ctx)

		if err := Update(session, req.Username, bson.D{}, req.Payload, client.Database(repo)); err != nil {
			defer session.AbortTransaction(ctx)
			return errors.Wrap(err, "unable to update records")
		}
		result.Payload = req.Payload

		return nil
	}); err != nil {
		return nil, err
	}

	return result, nil
}

func (s *DopeServer) Delete(ctx context.Context, req *DopeRequest) (*Empty, error) {
	empty := &Empty{RequestId: req.RequestId}

	if s.client == nil {
		return empty, ErrNoMongoClient
	}

	client := s.client
	if err := client.UseSession(ctx, func(session mongo.SessionContext) error {
		defer session.EndSession(ctx)

		if err := Delete(session, req.Username, req.Payload, client.Database(repo)); err != nil {
			defer session.AbortTransaction(ctx)
			return errors.Wrap(err, "unable to delete records")
		}

		return nil
	}); err != nil {
		return empty, err
	}

	return empty, nil
}

Finally, we'll create the grpc/ and rest/ packages along with their respective server.go files. The rest/server.go bit of code is just a proxy to grpc/server.go, which allows for RESTful calls rather than forcing gRPC only.

rest/server.go

package rest

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"time"

	"github.com/elliottpolk/super"
	"github.com/elliottpolk/super/config"

	"github.com/grpc-ecosystem/grpc-gateway/runtime"
	"github.com/pkg/errors"
	log "github.com/sirupsen/logrus"
	"google.golang.org/grpc"
)

func Serve(ctx context.Context, comp *config.Composition) error {
	var (
		mux  = runtime.NewServeMux()
		opts = []grpc.DialOption{grpc.WithInsecure()}
	)

	_ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	// register services
	if err := super.RegisterDopeServiceHandlerFromEndpoint(_ctx, mux, fmt.Sprintf(":%s", comp.Server.RpcPort), opts); err != nil {
		return errors.Wrap(err, "unable to register super service handler")
	}

	server := &http.Server{
		Addr:    fmt.Sprintf(":%s", comp.Server.HttpPort),
		Handler: mux,
	}

	// graceful shutdown
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt)
	go func() {
		// block until an interrupt (i.e. a ^C) comes through, then shut down gracefully
		<-c

		sctx, cancel := context.WithTimeout(_ctx, 5*time.Second)
		defer cancel()

		log.Infoln("shutting down HTTP/RESTful gateway...")
		if err := server.Shutdown(sctx); err != nil {
			log.Error(err)
		}
	}()

	// start the HTTPS listener in a separate goroutine since it is a blocking func
	go func() {
		cert, key := comp.Server.TlsCert, comp.Server.TlsKey
		if len(cert) < 1 || len(key) < 1 {
			return // skip if no cert nor key
		}

		if _, err := os.Stat(cert); err != nil {
			log.Error(errors.Wrap(err, "unable to access TLS cert file"))
			return
		}

		if _, err := os.Stat(key); err != nil {
			log.Error(errors.Wrap(err, "unable to access TLS key file"))
			return
		}

		server := &http.Server{
			Addr:    fmt.Sprintf(":%s", comp.Server.HttpsPort),
			Handler: mux,
		}

		// graceful shutdown
		c := make(chan os.Signal, 1)
		signal.Notify(c, os.Interrupt)
		go func() {
			// block until an interrupt (i.e. a ^C) comes through, then shut down gracefully
			<-c

			sctx, cancel := context.WithTimeout(_ctx, 5*time.Second)
			defer cancel()

			log.Infoln("shutting down HTTPS/RESTful gateway...")
			if err := server.Shutdown(sctx); err != nil {
				log.Error(err)
			}
		}()

		log.Infoln("starting HTTPSRESTful gateway...")
		log.Fatal(server.ListenAndServeTLS(cert, key))
	}()

	log.Infoln("starting HTTP/RESTful gateway...")
	return server.ListenAndServe()
}

grpc/server.go

package grpc

import (
	"context"
	"fmt"
	"net"
	"os"
	"os/signal"
	"time"

	"github.com/elliottpolk/super"
	"github.com/elliottpolk/super/config"

	"github.com/pkg/errors"
	log "github.com/sirupsen/logrus"
	"google.golang.org/grpc"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func Serve(ctx context.Context, comp *config.Composition) error {
	listener, err := net.Listen("tcp", fmt.Sprintf(":%s", comp.Server.RpcPort))
	if err != nil {
		return errors.Wrap(err, "unable to create tcp listener")
	}

	server := grpc.NewServer()

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(comp.Db.ConnString()))
	if err != nil {
		return errors.Wrap(err, "unable to generate mongo client")
	}
	defer client.Disconnect(ctx)

	if err := client.Ping(ctx, readpref.Primary()); err != nil {
		return errors.Wrap(err, "unable to verify connection to mongo")
	}

	// register services
	super.RegisterDopeServiceServer(server, super.NewDopeServer(comp, client))

	// graceful shutdown
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt)
	go func() {
		// receiving an interrupt signal, similar to a 'Ctrl+C'
		for range c {
			log.Println("shutting down gRPC server...")
			server.GracefulStop()

			<-ctx.Done()
		}
	}()

	log.Println("starting gRPC server...")
	return server.Serve(listener)
}

Now that we've got all this code, the project should look a little like the tree below. Of course, since I've gotten you to copy & paste all that (‘cause you know you do), you can just go here to check out the sample repo.

$ tree
.
├── cmd
│   └── main.go
├── config
│   └── composition.go
├── dope.go
├── dope.pb.go
├── dopeservice.go
├── dopeservice.pb.go
├── dopeservice.pb.gw.go
├── error.go
├── grpc
│   └── server.go
├── proto
│   ├── dope.proto
│   ├── dopeservice.proto
│   └── record.proto
├── record.pb.go
└── rest
    └── server.go

5 directories, 14 files

Once you have the code, let's run it with the command go run cmd/main.go. You'll need a backing MongoDB for this. We'll be lazy for now and pull from Docker Hub. For this sample, you can run a bare insecure DB, but for the love of Bob, always use good practices when setting up a database (like securing it properly).

$ docker pull mongo
Using default tag: latest
latest: Pulling from library/mongo
9ff7e2e5f967: Pull complete
59856638ac9f: Pull complete
6f317d6d954b: Pull complete
a9dde5e2a643: Pull complete
815c6aedc001: Pull complete
8566b2594855: Pull complete
01c9fe451980: Pull complete
5c9e7bc12cea: Pull complete
c64dd2c4159a: Pull complete
6c9522757e83: Pull complete
7cedccbc13a9: Pull complete
29aec2f2353d: Pull complete
08bcfe00e506: Pull complete
Digest: sha256:6b8cefbef0e6c4f3d35ffbf546e77e18c9737b393de6f96dbf75e6ba0185d876
Status: Downloaded newer image for mongo:latest

$ docker run -d --name dopedb -p 27017:27017 mongo
3bee13d462954a51398352be59f543d78075842ff6f222c45bba47ffacb615aa
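
With the database up, go run cmd/main.go should bring up both the gRPC server and the RESTful gateway (pass the datastore address so the Mongo connection string points somewhere real). A quick smoke test against the REST side could look like the following; the payload is purely illustrative, and the exact JSON field casing depends on how the generated grpc-gateway code marshals things:

$ go run cmd/main.go --datastore-addr localhost

# in another terminal, create a record ...
$ curl -X POST http://localhost:8080/api/v1/dopes \
    -H 'Content-Type: application/json' \
    -d '{
          "request_id": "req-001",
          "username": "some_user",
          "payload": [{
            "record_info": { "created_by": "some_user" },
            "dope_1": "abc 123",
            "dope_2": 42
          }]
        }'

# ... and read it back
$ curl http://localhost:8080/api/v1/dopes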

Conclusion

We've now got our base dope CRUD app ready to go. Coming up next, we'll tear it back down into templates and build the boilerplate that generates this code for us. We'll follow that with a writeup of the service that ties it all together, topped with a beautiful bow of a UI.

Let me know on Twitter if you have any suggestions on how to improve this or if you'd like to see something a little different in the formatting. I struggled with this one as I didn't want it to be super long, but that's just how this worked out.