Cloud Logging Go Reference

An interactive "Guide Me" tutorial is available that walks through using the client library in a Go application.

Example Usage

First, create a logging.Client to use throughout your application:

ctx := context.Background()
client, err := logging.NewClient(ctx, "my-project")
if err != nil {
   // TODO: Handle error.
}

Usually, you'll want to add log entries to a buffer that is periodically flushed (automatically and asynchronously) to the Cloud Logging service.

logger := client.Logger("my-log")
logger.Log(logging.Entry{Payload: "something happened!"})
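
A payload need not be a plain string. As an illustrative sketch (the log name, severity, and label values are arbitrary), a Logger can also ingest a structured, JSON-serializable payload with an explicit severity and per-entry labels:

logger := client.Logger("my-log")
logger.Log(logging.Entry{
   Severity: logging.Error,
   Payload: map[string]interface{}{
      "message": "something failed",
      "code":    500,
   },
   // Labels set here apply only to this entry.
   Labels: map[string]string{"handler": "checkout"},
})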

If you need to write a critical log entry, use the synchronous ingestion method LogSync.

logger := client.Logger("my-log")
logger.LogSync(context.Background(), logging.Entry{Payload: "something happened!"})

Close your client before your program exits to flush any buffered log entries.

err = client.Close()
if err != nil {
   // TODO: Handle error.
}

Logger configuration options

The logging.Client.Logger method accepts LoggerOption configuration arguments. The following options are supported:

CommonLabels (map[string]string)
   The set of labels ingested with every log entry written by the Logger.

ConcurrentWriteLimit (int)
   Number of parallel goroutines the Logger uses to ingest logs asynchronously. A high number of goroutines may exhaust API quota. The default is 1.

DelayThreshold (time.Duration)
   Maximum time a log entry is buffered on the client before being ingested. The default is 1 second.

EntryCountThreshold (int)
   Maximum number of log entries buffered on the client before being ingested. The default is 1000.

EntryByteThreshold (int)
   Maximum size, in bytes, of log entries buffered on the client before being ingested. The default is 8 MiB.

EntryByteLimit (int)
   Maximum size, in bytes, of a single write call that ingests log entries. If EntryByteLimit is smaller than EntryByteThreshold, the latter has no effect. The default is zero, meaning there is no limit.

BufferedByteLimit (int)
   Maximum number of bytes the Logger will keep in memory before returning ErrOverflow. This option limits the total memory consumption of the Logger (note that each Logger has its own, separate limit). BufferedByteLimit can be reached even if it is larger than EntryByteThreshold or EntryByteLimit, because calls triggered by the latter two options may be enqueued (and hence occupy memory) while new log entries are being added.

ContextFunc (func() (ctx context.Context, afterCall func()))
   Callback function called to obtain a context.Context during async log ingestion.

SourceLocationPopulation (one of logging.DoNotPopulateSourceLocation, logging.PopulateSourceLocationForDebugEntries, or logging.AlwaysPopulateSourceLocation)
   Controls auto-population of the logging.Entry.SourceLocation field when ingesting log entries: population can be disabled entirely, enabled only for log entries at Debug severity, or enabled for all log entries. Enabling it for all entries may degrade performance; use logging_test.BenchmarkSourceLocationPopulation to test performance with and without the option. The default is logging.DoNotPopulateSourceLocation.

PartialSuccess (no arguments)
   Makes each write call to the Logging service with the partialSuccess flag set. The default is to make calls without setting the flag.

RedirectAsJSON (io.Writer)
   Converts each log entry to a one-line JSON string according to the structured logging format and writes it to the provided io.Writer. Use this option with os.Stdout or os.Stderr to leverage out-of-process ingestion of logs by logging agents deployed in Cloud Logging environments.
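
As an illustrative sketch (the log name, label values, and thresholds are arbitrary), several of these options can be combined when creating a Logger:

logger := client.Logger("my-log",
   logging.CommonLabels(map[string]string{"env": "prod"}), // attached to every entry
   logging.ConcurrentWriteLimit(2),                        // two ingestion goroutines
   logging.DelayThreshold(5*time.Second),                  // flush at least every 5 seconds
   logging.EntryCountThreshold(100),                       // ...or after 100 buffered entries
)

Similarly, passing logging.RedirectAsJSON(os.Stdout) instead would route entries to stdout as structured JSON for agent-based ingestion.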