ValueError: setting an array element with a sequence. #53
Comments
I ran into the same issue. I think it's a problem with sparse matrix input: https://stackoverflow.com/questions/46579164/use-sparse-input-in-keras-with-tensorflow
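For anyone trying to understand the error itself: it can be reproduced without Keras at all. When NumPy is asked to build a float array from nested sequences of unequal length (which is effectively what happens when a scipy sparse matrix slips into `np.asarray` inside the Keras feed loop), it raises the same ValueError that the traceback below ends with. A minimal sketch:

```python
import numpy as np

# Ragged nested lists cannot be packed into a rectangular float array,
# which triggers the same ValueError seen in the Keras traceback.
msg = ""
try:
    np.asarray([[1.0, 2.0], [3.0]], dtype=np.float32)
except ValueError as e:
    msg = str(e)

print(msg)
```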
Have you solved this problem?
Epoch 1/200
Is this problem really caused by the inputs? I mean, the input array is not aligned?
It seems to be an issue with `sparse=True` in the Input layer. There are two possible solutions: remove `sparse=True`, or convert the sparse input to a `tf.SparseTensor` as follows.

Step 1:

```python
import numpy as np
import tensorflow as tf

def convert_sparse_matrix_to_sparse_tensor(X):
    # from https://stackoverflow.com/questions/40896157/scipy-sparse-csr-matrix-to-tensorflow-sparsetensor-mini-batch-gradient-descent
    coo = X.tocoo()
    indices = np.mat([coo.row, coo.col]).transpose()
    return tf.SparseTensor(indices, coo.data, coo.shape)

# apply this before training
graph[1] = convert_sparse_matrix_to_sparse_tensor(graph[1])
```

Step 2:

```python
# adjust model.fit to use tf.Tensor (sparse in this case)
model.fit(graph, y_train, sample_weight=train_mask,
          steps_per_epoch=1, epochs=1, shuffle=False, verbose=0)
```

Step 3:

```python
# adjust model.predict as well
preds = model.predict(graph, steps=1)
```

I'm not sure these solutions don't affect the end result, as I couldn't run the original code.
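To see what the conversion in step 1 actually produces, here is a self-contained sketch using made-up `row`/`col`/`data` arrays in place of a real scipy COO matrix (no scipy or TensorFlow required). It also shows that `np.stack` builds the same `(nnz, 2)` index array as the deprecated `np.mat(...).transpose()` idiom:

```python
import numpy as np

# Hypothetical coo.row / coo.col / coo.data of a 3x3 sparse matrix,
# for illustration only.
rows = np.array([0, 1, 2])
cols = np.array([2, 0, 1])
data = np.array([1.0, 2.0, 3.0])

# Equivalent to np.mat([rows, cols]).transpose(): one (row, col) pair
# per nonzero entry, which is the index format tf.SparseTensor expects.
indices = np.stack([rows, cols], axis=1)
print(indices.tolist())  # [[0, 2], [1, 0], [2, 1]]
```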
Your second solution hit the error; the stack trace is as follows.
It looks like you have already converted to tf.SparseTensor before passing to the convert function. I'm sharing a gist with the whole training code: https://gist.github.com/Falcatrua/fc4ed4d2cb33f08acf54bdf12c45d641
For the first solution, the accuracy is too low if you remove `sparse=True`. Additionally, the second one failed. Stack:
Facing the same issue.
Hello, I have the same problem too. Has anyone solved it?
```
Loading cora dataset...
Dataset has 2708 nodes, 5429 edges, 1433 features.
Using local pooling filters...
WARNING:tensorflow:From /Users/manohar/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

ValueError                                Traceback (most recent call last)
in ()
     66                     epochs=1,
     67                     shuffle=False,
---> 68                     verbose=0)
     69
     70 # Predict on full dataset

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1237                     steps_per_epoch=steps_per_epoch,
   1238                     validation_steps=validation_steps,
-> 1239                     validation_freq=validation_freq)
   1240
   1241     def evaluate(self,

~/anaconda3/lib/python3.6/site-packages/keras/engine/training_arrays.py in fit_loop(model, fit_function, fit_inputs, out_labels, batch_size, epochs, verbose, callbacks, val_function, val_inputs, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq)
    194                     ins_batch[i] = ins_batch[i].toarray()
    195
--> 196                 outs = fit_function(ins_batch)
    197                 outs = to_list(outs)
    198                 for l, o in zip(out_labels, outs):

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
   3275           tensor_type = dtypes_module.as_dtype(tensor.dtype)
   3276           array_vals.append(np.asarray(value,
-> 3277                                        dtype=tensor_type.as_numpy_dtype))
   3278
   3279     if self.feed_dict:

~/anaconda3/lib/python3.6/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order)
     83
     84     """
---> 85     return array(a, dtype, copy=False, order=order)
     86
     87

ValueError: setting an array element with a sequence.
```
I am trying to run the code as-is but am getting this error. The code ran fine two weeks ago, but I am having trouble now.
Do you think it's the TensorFlow or Keras version?
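One way to narrow that down is to print the installed versions and compare them against a known-good environment. A small standard-library sketch (requires Python 3.8+ for `importlib.metadata`; the package names are the usual PyPI ones):

```python
from importlib.metadata import version, PackageNotFoundError

# Collect installed versions of the packages the traceback involves;
# None marks a package that isn't installed.
versions = {}
for pkg in ("tensorflow", "keras", "numpy"):
    try:
        versions[pkg] = version(pkg)
    except PackageNotFoundError:
        versions[pkg] = None

print(versions)
```

On Python 3.6/3.7, `pkg_resources.get_distribution(pkg).version` from setuptools serves the same purpose.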