This is the training function I'm looping over:
```python
@tf.function
def train_step(optimizer, target_sample):
    with tf.GradientTape() as tape:
        loss = -tf.reduce_mean(diglm.weighted_log_prob(target_sample))
    variables = tape.watched_variables()
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))
    return loss
```
And this is how I'm calling the function inside the loop:
```python
LR = 1e-3
NUM_EPOCHS = 100

learning_rate = tf.Variable(LR, trainable=False)
optimizer = tf.keras.optimizers.Adam(learning_rate)

loss = 0
for epoch in range(NUM_EPOCHS):
    if epoch % 10 == 9:
        print(f"Epoch n. {epoch+1}. Loss={loss}.")
    for i in range(int(DATASET_SIZE / BATCH_SIZE)):
        batch_label = y_data.sample(BATCH_SIZE, random_state=42)
        batch_feature = data_train.sample(BATCH_SIZE, random_state=42)
        loss = train_step(optimizer, tf.tuple(batch_label, batch_feature))
```
I get the following error:
```
AttributeError                            Traceback (most recent call last)
in ()
     13 batch_label = y_data.sample(BATCH_SIZE, random_state=42)
     14 batch_feature = data_train.sample(BATCH_SIZE, random_state=42)
---> 15 loss = train_step(optimizer, tf.tuple(batch_label, batch_feature))

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
   1145       except Exception as e:  # pylint:disable=broad-except
   1146         if hasattr(e, "ag_error_metadata"):
-> 1147           raise e.ag_error_metadata.to_exception(e)
   1148         else:
   1149           raise

AttributeError: in user code:

    File "<ipython-input-45-8c287debb608>", line 4, in train_step  *
        loss = -tf.reduce_mean(diglm.weighted_log_prob(target_sample))
    File "/content/SpQR-Flow/SpQR-Flow/diglm.py", line 68, in weighted_log_prob  *
        lpp = self.log_prob_parts(value)
    File "/usr/local/lib/python3.7/dist-packages/tensorflow_probability/python/distributions/joint_distribution.py", line 579, in log_prob_parts  **
        self._map_measure_over_dists('log_prob', value),
    File "/usr/local/lib/python3.7/dist-packages/tensorflow_probability/python/distributions/joint_distribution.py", line 750, in _map_measure_over_dists
        lambda dist, value, **_: ValueWithTrace(value=value,  # pylint: disable=g-long-lambda
    File "/usr/local/lib/python3.7/dist-packages/tensorflow_probability/python/distributions/joint_distribution.py", line 834, in _call_execute_model
        flat_value = None if value is None else self._model_flatten(value)
    File "/usr/local/lib/python3.7/dist-packages/tensorflow_probability/python/distributions/joint_distribution_named.py", line 342, in _model_flatten
        return tuple(getattr(xs, n) for n in self._dist_fn_name)
    File "/usr/local/lib/python3.7/dist-packages/tensorflow_probability/python/distributions/joint_distribution_named.py", line 342, in <genexpr>
        return tuple(getattr(xs, n) for n in self._dist_fn_name)

    AttributeError: 'Tensor' object has no attribute 'features'
```
which I believe is caused by the following code:

```python
def weighted_log_prob(self, value, scaling_const=.1):
    lpp = self.log_prob_parts(value)
    return lpp["labels"] + scaling_const * lpp["features"]
```
which indexes `lpp` as if it were a dictionary, but it isn't: according to the documentation for `tensorflow_probability.distributions.JointDistributionNamedAutoBatched`, the return value of `log_prob_parts` is:

"a tuple of Tensors representing the log_prob for each distribution_fn evaluated at each corresponding value."
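For what it's worth, the `AttributeError` itself comes from the `getattr(xs, n)` call in `_model_flatten` shown in the traceback: the named joint distribution flattens its input by attribute lookup on the part names, so a plain tuple (or the Tensor produced by `tf.tuple`) has no `.features` attribute, while a dict-like or namedtuple value with those field names would flatten fine. A minimal pure-Python sketch of that flattening step (the `Parts` type and `model_flatten` helper are hypothetical, just to illustrate the mismatch):

```python
from collections import namedtuple

# Hypothetical stand-in for the structured value JointDistributionNamed
# expects: one field per named model part.
Parts = namedtuple("Parts", ["labels", "features"])

def model_flatten(xs, names=("labels", "features")):
    # Mirrors joint_distribution_named.py line 342:
    # flatten the input by looking up each part name as an attribute.
    return tuple(getattr(xs, n) for n in names)

# Works: the value carries the part names as attributes.
print(model_flatten(Parts(labels=1.0, features=2.0)))  # (1.0, 2.0)

# Fails the same way as the traceback: a plain tuple has no .labels/.features.
try:
    model_flatten((1.0, 2.0))
except AttributeError as e:
    print(e)  # 'tuple' object has no attribute 'labels'
```

This suggests the mismatch is between the structure passed in (`tf.tuple(batch_label, batch_feature)`) and the named structure the joint distribution flattens by attribute, independent of whether `log_prob_parts` then returns a tuple or a dict.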