foterelli
Alex, Ukraine
i could tell you but then i'd have to kill you.
Currently Offline
Recent Activity
1,770 hrs on record
last played on 31 Jan
61 hrs on record
last played on 2 Jun, 2024
157 hrs on record
last played on 8 Mar, 2024
Jhan 6 Jun, 2018 @ 2:17pm 
__________________________######_________
___________________________######_________
____________________________####__________
_____________________________##___________
___________________________######_________
__________________________#######_________
__####__________________#########_________
_######________________###_######_________
_######_______________###__######_________
__####_______________###____######_________
_____##################_____######_________
_____##################+REP######________
______#################____######_________
_______###_______#####_____######_________
______###_______#####______######_________
_____###________#####______######_________
#######_________##########_#############
Dennis 16 Apr, 2018 @ 2:25pm 
the most pointless comment section I've ever seen in my life
qqqqq 7 Apr, 2018 @ 9:37am 
PLEASE TEACH ME THE GAME
Dennis 28 Feb, 2018 @ 8:09am 
# TensorFlow 1.x graph for a simple RNN classifier; assumes names, indStates,
# max_sequence_length, num_input, num_classes, num_hidden, learning_rate and the
# usual weight_variable / bias_variable initialiser helpers are defined earlier.
import numpy as np
import tensorflow as tf

# 2:1 train/test split (integer division so the slice indices stay ints).
size = len(names)
split = size * 2 // 3
train_X = np.array(names[:split])
train_y = np.array(indStates[:split])
test_X = np.array(names[split:])
test_y = np.array(indStates[split:])

# Placeholders for padded input sequences and one-hot labels.
X = tf.placeholder(tf.float32, [None, max_sequence_length, num_input])
y = tf.placeholder(tf.float32, [None, num_classes])
weights = weight_variable([num_hidden, num_classes])
biases = bias_variable([num_classes])

# Single-layer RNN; classify from the output at the last time step.
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(num_hidden)
outputs, states = tf.nn.dynamic_rnn(rnn_cell, X, dtype=tf.float32)
y_ = tf.matmul(outputs[:, -1, :], weights) + biases
# Softmax cross-entropy loss, minimised with Adam.
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_, labels=y))
train_step = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
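The snippet above only defines the graph; a minimal sketch of actually running it could look like the following, where num_epochs and the accuracy check are illustrative additions, and train_X / train_y are assumed to already be padded sequences with one-hot labels.
# Hypothetical training run for the graph above; num_epochs is an illustrative
# parameter, not something from the original comment.
correct = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        sess.run(train_step, feed_dict={X: train_X, y: train_y})
    print('test accuracy:', sess.run(accuracy, feed_dict={X: test_X, y: test_y}))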
Dennis 20 Feb, 2018 @ 2:26pm 
what, is it CS? whoa, oh man
ChocoPox93 7 Feb, 2018 @ 11:06am 
Do u kno da wae