ValueError: setting an array element with a sequence Tensorflow and numpy


I'm trying to run this code; next_batch is a function I wrote myself:



def next_batch(batch_size):
    label = [0, 1, 0, 0, 0]
    X = []
    Y = []
    for i in range(0, batch_size):
        rand = random.choice(os.listdir(mnist))
        rand = mnist + rand
        img = cv2.imread(str(rand), 0)
        img = np.array(img)
        img = img.ravel()
        X.append(img)
        Y.append(label)
    X = np.array(X)
    Y = np.array(Y)
    return X, Y
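As an aside that is not in my original script: a minimal sketch (with made-up shapes) of how a batch built this way can become the object array that triggers the error. If every raveled image has the same length, np.array builds a clean 2-D batch; if one image differs in size (or cv2.imread returned None for a stray file), the result is an array of sequences instead, and converting it to a numeric dtype, which sess.run does to every feed_dict value, fails.

```python
import numpy as np

# Two "images" that ravel to the same length stack into a clean 2-D batch.
same = np.array([np.zeros(6), np.zeros(6)])
print(same.shape, same.dtype)  # (2, 6) float64

# If one image has a different size, the batch becomes an object array of
# sequences rather than a rectangular numeric array.
ragged = np.array([np.zeros(6), np.zeros(4)], dtype=object)
print(ragged.dtype)  # object

# Coercing such an object array to a numeric dtype -- which is what
# session.run does to every feed_dict value -- raises the ValueError.
try:
    np.asarray(ragged, dtype=np.float32)
except ValueError as err:
    print(err)  # e.g. "setting an array element with a sequence."
```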


Then I want to use the X and Y arrays to train my network.
I run it with this code (mainly the bottom part of def train(train_model) is where it all goes wrong):



def train(train_model=True):
    """
    Used to train the autoencoder by passing in the necessary inputs.
    :param train_model: True -> Train the model, False -> Load the latest trained model and show the image grid.
    :return: does not return anything
    """
    with tf.variable_scope(tf.get_variable_scope()):
        encoder_output = encoder(x_input)
        # Concat class label and the encoder output
        decoder_input = tf.concat([y_input, encoder_output], 1)
        decoder_output = decoder(decoder_input)

    with tf.variable_scope(tf.get_variable_scope()):
        d_real = discriminator(real_distribution)
        d_fake = discriminator(encoder_output, reuse=True)

    with tf.variable_scope(tf.get_variable_scope()):
        decoder_image = decoder(manual_decoder_input, reuse=True)

    # Autoencoder loss
    autoencoder_loss = tf.reduce_mean(tf.square(x_target - decoder_output))

    # Discriminator loss
    dc_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_real), logits=d_real))
    dc_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(d_fake), logits=d_fake))
    dc_loss = dc_loss_fake + dc_loss_real

    # Generator loss
    generator_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_fake), logits=d_fake))

    all_variables = tf.trainable_variables()
    dc_var = [var for var in all_variables if 'dc_' in var.name]
    en_var = [var for var in all_variables if 'e_' in var.name]

    # Optimizers
    autoencoder_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                   beta1=beta1).minimize(autoencoder_loss)
    discriminator_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                     beta1=beta1).minimize(dc_loss, var_list=dc_var)
    generator_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                 beta1=beta1).minimize(generator_loss, var_list=en_var)

    init = tf.global_variables_initializer()

    # Reshape images to display them
    input_images = tf.reshape(x_input, [-1, 368, 432, 1])
    generated_images = tf.reshape(decoder_output, [-1, 368, 432, 1])

    # Tensorboard visualization
    tf.summary.scalar(name='Autoencoder Loss', tensor=autoencoder_loss)
    tf.summary.scalar(name='Discriminator Loss', tensor=dc_loss)
    tf.summary.scalar(name='Generator Loss', tensor=generator_loss)
    tf.summary.histogram(name='Encoder Distribution', values=encoder_output)
    tf.summary.histogram(name='Real Distribution', values=real_distribution)
    tf.summary.image(name='Input Images', tensor=input_images, max_outputs=10)
    tf.summary.image(name='Generated Images', tensor=generated_images, max_outputs=10)
    summary_op = tf.summary.merge_all()

    # Saving the model
    saver = tf.train.Saver()
    step = 0
    with tf.Session() as sess:
        if train_model:
            tensorboard_path, saved_model_path, log_path = form_results()
            sess.run(init)
            writer = tf.summary.FileWriter(logdir=tensorboard_path, graph=sess.graph)
            for i in range(n_epochs):
                # print(n_epochs)
                n_batches = int(10000 / batch_size)
                print("------------------Epoch {}/{}------------------".format(i, n_epochs))
                for b in range(1, n_batches + 1):
                    # print("In the loop")
                    z_real_dist = np.random.randn(batch_size, z_dim) * 5.
                    batch_x, batch_y = next_batch(batch_size)
                    # print("Created the batches")
                    sess.run(autoencoder_optimizer,
                             feed_dict={x_input: batch_x, x_target: batch_x, y_input: batch_y})
                    print("batch_x", batch_x)
                    print("x_input:", x_input)
                    print("x_target:", x_target)
                    print("y_input:", y_input)
                    sess.run(discriminator_optimizer,
                             feed_dict={x_input: batch_x, x_target: batch_x, real_distribution: z_real_dist})
                    sess.run(generator_optimizer,
                             feed_dict={x_input: batch_x, x_target: batch_x})
                    # print("setup the session")
                    if b % 50 == 0:
                        a_loss, d_loss, g_loss, summary = sess.run(
                            [autoencoder_loss, dc_loss, generator_loss, summary_op],
                            feed_dict={x_input: batch_x, x_target: batch_x,
                                       real_distribution: z_real_dist, y_input: batch_y})
                        writer.add_summary(summary, global_step=step)
                        print("Epoch: {}, iteration: {}".format(i, b))
                        print("Autoencoder Loss: {}".format(a_loss))
                        print("Discriminator Loss: {}".format(d_loss))
                        print("Generator Loss: {}".format(g_loss))
                        with open(log_path + '/log.txt', 'a') as log:
                            log.write("Epoch: {}, iteration: {}\n".format(i, b))
                            log.write("Autoencoder Loss: {}\n".format(a_loss))
                            log.write("Discriminator Loss: {}\n".format(d_loss))
                            log.write("Generator Loss: {}\n".format(g_loss))
                    step += 1

            saver.save(sess, save_path=saved_model_path, global_step=step)
        else:
            # Get the latest results folder
            all_results = os.listdir(results_path)
            all_results.sort()
            saver.restore(sess, save_path=tf.train.latest_checkpoint(results_path + '/' +
                                                                     all_results[-1] + '/Saved_models/'))
            generate_image_grid(sess, op=decoder_image)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Autoencoder Train Parameter")
    parser.add_argument('--train', '-t', type=bool, default=True,
                        help='Set to True to train a new model, False to load weights and display image grid')
    args = parser.parse_args()
    train(train_model=args.train)


Getting this error message:




Traceback (most recent call last):
  File "/Users/frederikcalsius/Desktop/adv/supervised_adversarial_autoencoder.py", line 290, in <module>
    train(train_model=args.train)
  File "/Users/frederikcalsius/Desktop/adv/supervised_adversarial_autoencoder.py", line 249, in train
    sess.run(autoencoder_optimizer, feed_dict={x_input: batch_x, x_target: batch_x, y_input: batch_y})
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/tensorflow/python/client/session.py", line 1121, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/numpy/core/numeric.py", line 538, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.



Process finished with exit code 1




I really don't get this error. Can somebody help me out with this?
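Not part of the original post: a sketch of a more defensive batch builder that fails loudly instead of silently producing a ragged batch. The helper name, the injected image list, and the expected length are all made up for illustration; in the real script the images would come from cv2.imread (which returns None for unreadable files such as .DS_Store) and the expected length would be the flattened image size.

```python
import numpy as np

def build_batch(images, expected_len):
    """Stack raveled images into a (batch, expected_len) float32 array.

    Skips unreadable entries (None) and raises immediately on a
    wrong-sized image, instead of letting np.array build an object
    array of sequences that sess.run cannot convert later.
    """
    X = []
    for i, img in enumerate(images):
        if img is None:  # e.g. what cv2.imread returns for a stray file
            continue
        flat = np.asarray(img).ravel()
        if flat.size != expected_len:
            raise ValueError("image %d ravels to %d values, expected %d"
                             % (i, flat.size, expected_len))
        X.append(flat)
    return np.asarray(X, dtype=np.float32)

# A None entry is skipped; both remaining 2x3 images ravel to length 6.
good = build_batch([np.zeros((2, 3)), None, np.ones((2, 3))], expected_len=6)
print(good.shape, good.dtype)  # (2, 6) float32
```

A mismatched image now raises a clear ValueError naming the offending index, rather than the opaque "setting an array element with a sequence" deep inside session.run.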









Tags: python, python-3.x, numpy, neural-network, tensorflow





asked 4 mins ago by FreddyGump (11)




      11



