persia.embedding.optim

Module Contents

class persia.embedding.optim.Adagrad(lr=0.01, initial_accumulator_value=0.01, weight_decay=0, g_square_momentum=1, eps=1e-10, vectorwise_shared=False)

Bases: Optimizer

A wrapper to configure the embedding-server Adagrad optimizer.

Parameters
  • lr (float) – learning rate.

  • initial_accumulator_value (float, optional) – initial value of the Adagrad accumulator.

  • weight_decay (float, optional) – L2 penalty factor applied to the parameters.

  • g_square_momentum (float, optional) – decay factor applied to the accumulated squared gradients.

  • eps (float, optional) – term added to the denominator to avoid division by zero.

  • vectorwise_shared (bool, optional) – whether to share the optimizer state across all elements of each embedding vector instead of keeping per-element state.
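
Example (a minimal sketch of constructing the Adagrad optimizer; the argument values are illustrative only):

    from persia.embedding.optim import Adagrad

    # Adagrad optimizer for the embedding parameters on the embedding server.
    embedding_optimizer = Adagrad(
        lr=0.01,
        initial_accumulator_value=0.01,
        weight_decay=1e-5,
        g_square_momentum=1,
        eps=1e-10,
    )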

class persia.embedding.optim.Adam(lr=0.001, betas=(0.9, 0.999), weight_decay=0, eps=1e-08)

Bases: Optimizer

A wrapper to configure the embedding-server Adam optimizer.

Parameters
  • lr (float) – learning rate.

  • betas (tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square.

  • weight_decay (float, optional) – L2 penalty factor applied to the parameters.

  • eps (float, optional) – term added to the denominator to avoid division by zero.
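
Example (a minimal sketch; the argument values are illustrative only):

    from persia.embedding.optim import Adam

    # Adam optimizer for the embedding parameters, with coefficients for the
    # running averages of the gradient and its square.
    embedding_optimizer = Adam(
        lr=1e-3,
        betas=(0.9, 0.999),
        weight_decay=0.0,
        eps=1e-8,
    )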

class persia.embedding.optim.Optimizer

Bases: abc.ABC

Base optimizer that configures the embedding update behavior.

apply()

Register the sparse optimizer with the embedding server.
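
Example (a sketch only; apply() is normally invoked for you by the training context, and calling it directly assumes a running embedding server to register against):

    from persia.embedding.optim import SGD

    # Any concrete Optimizer subclass (SGD, Adam, Adagrad) is registered
    # with the embedding server through apply().
    embedding_optimizer = SGD(lr=0.1)
    embedding_optimizer.apply()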

class persia.embedding.optim.SGD(lr, momentum=0.0, weight_decay=0.0)

Bases: Optimizer

A wrapper to configure the embedding-server SGD optimizer.

Parameters
  • lr (float) – learning rate.

  • momentum (float, optional) – momentum factor.

  • weight_decay (float, optional) – L2 penalty factor applied to the parameters.
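
Example (a minimal sketch of a typical PERSIA training setup; the persia.ctx.TrainCtx arguments shown are an assumption, and a real training script may require additional ones such as device or mixed-precision settings):

    import torch

    from persia.ctx import TrainCtx
    from persia.embedding.optim import SGD

    model = torch.nn.Linear(16, 1)  # placeholder dense model
    dense_optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    embedding_optimizer = SGD(lr=0.1, momentum=0.9, weight_decay=1e-5)

    # Entering the context registers `embedding_optimizer` with the
    # embedding server (via Optimizer.apply()).
    with TrainCtx(
        model=model,
        embedding_optimizer=embedding_optimizer,
        dense_optimizer=dense_optimizer,
    ) as ctx:
        ...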