ValueError: setting an array element with a sequence Tensorflow and numpy
I am trying to run this code, which is a function I wrote myself:
import os
import random

import cv2
import numpy as np


def next_batch(batch_size):
    # `mnist` is the directory containing the training images (defined elsewhere in the script).
    label = [0, 1, 0, 0, 0]
    X = []
    Y = []
    for i in range(0, batch_size):
        rand = random.choice(os.listdir(mnist))
        rand = mnist + rand
        img = cv2.imread(str(rand), 0)  # read as grayscale
        img = np.array(img)
        img = img.ravel()
        X.append(img)
        Y.append(label)
    X = np.array(X)
    Y = np.array(Y)
    return X, Y
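(For reference, here is a small check of what this function hands back. It is not part of the original script; it assumes the same `mnist` directory variable and the imports above:)

bx, by = next_batch(4)
print(type(bx), bx.dtype, bx.shape)   # expected: a rectangular 2-D array, e.g. (4, 368 * 432)
print(type(by), by.dtype, by.shape)   # expected: (4, 5)
# If the images on disk are not all the same size, bx comes back with dtype=object
# and shape (4,), i.e. a ragged 1-D array of vectors instead of a rectangular 2-D array.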
Then I want to use the X and Y arrays to train my network.
I run it with this code (mainly the bottom part of def train(train_model) is where it all goes wrong):
def train(train_model=True):
    """
    Used to train the autoencoder by passing in the necessary inputs.
    :param train_model: True -> Train the model, False -> Load the latest trained model and show the image grid.
    :return: does not return anything
    """
    with tf.variable_scope(tf.get_variable_scope()):
        encoder_output = encoder(x_input)
        # Concat class label and the encoder output
        decoder_input = tf.concat([y_input, encoder_output], 1)
        decoder_output = decoder(decoder_input)

    with tf.variable_scope(tf.get_variable_scope()):
        d_real = discriminator(real_distribution)
        d_fake = discriminator(encoder_output, reuse=True)

    with tf.variable_scope(tf.get_variable_scope()):
        decoder_image = decoder(manual_decoder_input, reuse=True)

    # Autoencoder loss
    autoencoder_loss = tf.reduce_mean(tf.square(x_target - decoder_output))

    # Discriminator Loss
    dc_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_real), logits=d_real))
    dc_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(d_fake), logits=d_fake))
    dc_loss = dc_loss_fake + dc_loss_real

    # Generator loss
    generator_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_fake), logits=d_fake))

    all_variables = tf.trainable_variables()
    dc_var = [var for var in all_variables if 'dc_' in var.name]
    en_var = [var for var in all_variables if 'e_' in var.name]

    # Optimizers
    autoencoder_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                   beta1=beta1).minimize(autoencoder_loss)
    discriminator_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                     beta1=beta1).minimize(dc_loss, var_list=dc_var)
    generator_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                 beta1=beta1).minimize(generator_loss, var_list=en_var)

    init = tf.global_variables_initializer()

    # Reshape images to display them
    input_images = tf.reshape(x_input, [-1, 368, 432, 1])
    generated_images = tf.reshape(decoder_output, [-1, 368, 432, 1])

    # Tensorboard visualization
    tf.summary.scalar(name='Autoencoder Loss', tensor=autoencoder_loss)
    tf.summary.scalar(name='Discriminator Loss', tensor=dc_loss)
    tf.summary.scalar(name='Generator Loss', tensor=generator_loss)
    tf.summary.histogram(name='Encoder Distribution', values=encoder_output)
    tf.summary.histogram(name='Real Distribution', values=real_distribution)
    tf.summary.image(name='Input Images', tensor=input_images, max_outputs=10)
    tf.summary.image(name='Generated Images', tensor=generated_images, max_outputs=10)
    summary_op = tf.summary.merge_all()

    # Saving the model
    saver = tf.train.Saver()
    step = 0
    with tf.Session() as sess:
        if train_model:
            tensorboard_path, saved_model_path, log_path = form_results()
            sess.run(init)
            writer = tf.summary.FileWriter(logdir=tensorboard_path, graph=sess.graph)
            for i in range(n_epochs):
                # print(n_epochs)
                n_batches = int(10000 / batch_size)
                print("------------------Epoch {}/{}------------------".format(i, n_epochs))
                for b in range(1, n_batches + 1):
                    # print("In the loop")
                    z_real_dist = np.random.randn(batch_size, z_dim) * 5.
                    batch_x, batch_y = next_batch(batch_size)
                    # print("Created the batches")
                    sess.run(autoencoder_optimizer, feed_dict={x_input: batch_x, x_target: batch_x, y_input: batch_y})
                    print("batch_x", batch_x)
                    print("x_input:", x_input)
                    print("x_target:", x_target)
                    print("y_input:", y_input)
                    sess.run(discriminator_optimizer,
                             feed_dict={x_input: batch_x, x_target: batch_x, real_distribution: z_real_dist})
                    sess.run(generator_optimizer, feed_dict={x_input: batch_x, x_target: batch_x})
                    # print("setup the session")
                    if b % 50 == 0:
                        a_loss, d_loss, g_loss, summary = sess.run(
                            [autoencoder_loss, dc_loss, generator_loss, summary_op],
                            feed_dict={x_input: batch_x, x_target: batch_x,
                                       real_distribution: z_real_dist, y_input: batch_y})
                        writer.add_summary(summary, global_step=step)
                        print("Epoch: {}, iteration: {}".format(i, b))
                        print("Autoencoder Loss: {}".format(a_loss))
                        print("Discriminator Loss: {}".format(d_loss))
                        print("Generator Loss: {}".format(g_loss))
                        with open(log_path + '/log.txt', 'a') as log:
                            log.write("Epoch: {}, iteration: {}\n".format(i, b))
                            log.write("Autoencoder Loss: {}\n".format(a_loss))
                            log.write("Discriminator Loss: {}\n".format(d_loss))
                            log.write("Generator Loss: {}\n".format(g_loss))
                    step += 1
                saver.save(sess, save_path=saved_model_path, global_step=step)
        else:
            # Get the latest results folder
            all_results = os.listdir(results_path)
            all_results.sort()
            saver.restore(sess, save_path=tf.train.latest_checkpoint(results_path + '/' +
                                                                     all_results[-1] + '/Saved_models/'))
            generate_image_grid(sess, op=decoder_image)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Autoencoder Train Parameter")
    parser.add_argument('--train', '-t', type=bool, default=True,
                        help='Set to True to train a new model, False to load weights and display image grid')
    args = parser.parse_args()
    train(train_model=args.train)
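(Aside on the command-line handling, unrelated to the error itself: argparse's type=bool treats any non-empty string, including "False", as True, so --train False would still train. A possible alternative, shown only as a sketch and not taken from the original script:)

# Sketch: a flag that actually distinguishes "True" from "False" on the command line.
parser = argparse.ArgumentParser(description="Autoencoder Train Parameter")
parser.add_argument('--train', '-t', choices=['True', 'False'], default='True',
                    help='True to train a new model, False to load weights and display the image grid')
args = parser.parse_args()
train(train_model=(args.train == 'True'))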
I get this error message:
Traceback (most recent call last):
  File "/Users/frederikcalsius/Desktop/adv/supervised_adversarial_autoencoder.py", line 290, in <module>
    train(train_model=args.train)
  File "/Users/frederikcalsius/Desktop/adv/supervised_adversarial_autoencoder.py", line 249, in train
    sess.run(autoencoder_optimizer, feed_dict={x_input: batch_x, x_target: batch_x, y_input: batch_y})
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/tensorflow/python/client/session.py", line 1121, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/numpy/core/numeric.py", line 538, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.

Process finished with exit code 1
I really don't get this error. Can somebody help me out with this?
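(For context on the message itself: NumPy raises it when it is asked to pack rows of different lengths into a single rectangular array of a fixed dtype. A standalone repro, independent of TensorFlow; the exact wording can differ between NumPy versions:)

import numpy as np
# Two "rows" of different length forced into a float array triggers the same ValueError.
np.asarray([np.zeros(3), np.zeros(4)], dtype=np.float32)
# ValueError: setting an array element with a sequence.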
python python-3.x numpy neural-network tensorflow
asked 4 mins ago by FreddyGump