optflow.optimizers package#

Submodules#

optflow.optimizers.dp module#

optflow.optimizers.dp.inference_mdp(mdp_elements, policy, init_state_inputs: dict | None = None, inputs: dict | None = None)#
optflow.optimizers.dp.solve_mdp_with_dp(mdp_elements: dict, inputs: dict | None = None, params=None)#
optflow.optimizers.dp.to_hashable(s)#

optflow.optimizers.metaheuristics module#

optflow.optimizers.metaheuristics.hill_climbing(var_list, var_value: dict, obj_expr, constraints, sense, params: dict | None = None)#
optflow.optimizers.metaheuristics.solve(constraints: list, objective, sense, inputs=None, params: dict | None = None)#

Find the optimal variable values in a dataflow graph such that all constraints are satisfied and the objective is minimized/maximized.

Parameters:
  • constraints – a list of boolean nodes in the dataflow graph representing the constraints

  • objective – a scalar node in the dataflow graph representing the objective to be optimized

  • sense – flow.maximum or flow.minimum

  • inputs – a dict of data to be fed into the dataflow graph

  • params – see https://www.cvxpy.org/tutorial/advanced/index.html#setting-solver-options

Returns:

the optimal objective value (the optimal value of a variable x is obtained via x.optimized_value)
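
A minimal usage sketch of metaheuristics.solve (an illustration, not part of the documented API): it assumes variable nodes are created with a hypothetical flow.variable constructor and that arithmetic and comparison operators on nodes build the dataflow graph.

```python
# Hedged sketch: `flow.variable` and operator overloading on nodes are
# assumptions; only the solve() signature, flow.minimum/flow.maximum, and
# x.optimized_value are taken from the documentation above.
import optflow as flow
from optflow.optimizers import metaheuristics

x = flow.variable("x")  # hypothetical variable-node constructor
y = flow.variable("y")

constraints = [x + y <= 10, x >= 0, y >= 0]  # boolean nodes (constraints)
objective = (x - 3) ** 2 + (y - 4) ** 2      # scalar node (objective)

best = metaheuristics.solve(constraints, objective, sense=flow.minimum)
print(best)                                   # optimal objective value
print(x.optimized_value, y.optimized_value)   # optimal variable values
```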

optflow.optimizers.programming module#

optflow.optimizers.programming.parse_expr(expr, coef: float, d: dict)#
optflow.optimizers.programming.parse_expr_with_set(expr, coef: float, d: dict)#
optflow.optimizers.programming.solve(constraints: list, objective, sense, inputs=None, params: dict | None = None)#

Find the optimal variable values in a dataflow graph such that all constraints are satisfied and the objective is minimized/maximized.

Parameters:
  • constraints – a list of boolean nodes in the dataflow graph representing the constraints

  • objective – a scalar node in the dataflow graph representing the objective to be optimized

  • sense – flow.maximum or flow.minimum

  • inputs – a dict of data to be fed into the dataflow graph

  • params – see https://www.cvxpy.org/tutorial/advanced/index.html#setting-solver-options

Returns:

the optimal objective value (the optimal value of a variable x is obtained via x.optimized_value)
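
As above, a hedged sketch of programming.solve on a small linear program; the flow.variable constructor and the passthrough of params as CVXPY solver options are assumptions based on the signature and the link above.

```python
# Hedged sketch of a small LP; `flow.variable` is hypothetical and the exact
# params-to-CVXPY mapping is assumed rather than documented here.
import optflow as flow
from optflow.optimizers import programming

x = flow.variable("x")
y = flow.variable("y")

constraints = [2 * x + y <= 8, x + 3 * y <= 9, x >= 0, y >= 0]
objective = 3 * x + 2 * y

best = programming.solve(constraints, objective, sense=flow.maximum,
                         params={"verbose": True})  # CVXPY-style solver options (assumed)
print(best, x.optimized_value, y.optimized_value)
```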

optflow.optimizers.rl module#

class optflow.optimizers.rl.FullyConnectedNetwork(*args, **kwargs)#

Bases: Model

call(inputs, training=None, mask=None)#

Calls the model on new inputs and returns the outputs as tensors.

In this case call() just reapplies all ops in the graph to the new inputs (e.g. builds a new computational graph from the provided inputs).

Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.

Args:

  • inputs – Input tensor, or dict/list/tuple of input tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, check the guide [here](https://www.tensorflow.org/guide/keras/masking_and_padding).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.

class optflow.optimizers.rl.RLPointerNetwork(*args, **kwargs)#

Bases: Model

call(inputs, mode='policy', training=True)#

Calls the model on new inputs and returns the outputs as tensors.

In this case call() just reapplies all ops in the graph to the new inputs (e.g. builds a new computational graph from the provided inputs).

Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.

Args:

  • inputs – Input tensor, or dict/list/tuple of input tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, check the guide [here](https://www.tensorflow.org/guide/keras/masking_and_padding).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.

get_initial_state(batch_size)#
optflow.optimizers.rl.get_graph_node_attributes(g: Graph, e: Env)#
optflow.optimizers.rl.inference_mdp(mdp_elements, policy, policy_info, inputs)#
optflow.optimizers.rl.inference_policy(env: Env, policy, policy_info, state, extended_state=None, return_q_value=False)#
optflow.optimizers.rl.load_model(prob, save_path)#
optflow.optimizers.rl.train_mdp(mdp_elements: dict, inputs_generator: DataGenerator | None = None, save_path: str | None = None, params: dict | None = None)#

optflow.optimizers.search module#

class optflow.optimizers.search.MCTSNode(parent, t, action, info=None)#

Bases: object

optflow.optimizers.search.ma_mcts_policy(env: Env, agent_id: int, policy=None, policy_info=None, params: dict | None = None)#
optflow.optimizers.search.mcts_policy(env: Env, policy=None, policy_info=None, params: dict | None = None)#
optflow.optimizers.search.solve(prob: Problem, inputs=None, params: dict | None = None)#

Find the optimal variable values in a dataflow graph such that all constraints are satisfied and the objective is minimized/maximized.

Parameters:
  • prob – the Problem instance describing the optimization problem to be solved

  • inputs – a dict of data to be fed into the dataflow graph

  • params – a dict of solver-specific options

Returns:

the optimal objective value (the optimal value of a variable x is obtained via x.optimized_value)
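
A hedged sketch of the search-based solver. Unlike the two solvers above, it takes a Problem object; the flow.Problem constructor shown here is an assumption inferred from the type annotation, not something this page documents.

```python
# Hedged sketch: `flow.variable` and `flow.Problem` are hypothetical
# constructors inferred from the signatures above.
import optflow as flow
from optflow.optimizers import search

x = flow.variable("x")
prob = flow.Problem(constraints=[x >= 0, x <= 5],
                    objective=(x - 2) ** 2,
                    sense=flow.minimum)

best = search.solve(prob)        # search-based optimization (MCTS variants listed above)
print(best, x.optimized_value)
```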

optflow.optimizers.search.solve_ma_mcts(prob: Problem, inputs: dict | None = None, params: dict | None = None)#
optflow.optimizers.search.solve_mcts(prob: Problem, inputs: dict | None = None, params: dict | None = None)#

Module contents#