๋ณธ๋ฌธ ๋ฐ”๋กœ๊ฐ€๊ธฐ

DataScience/Machine Learning Basic

Tensorflow Linear Regression Implementation

Linear Regression์˜ ์›๋ฆฌ์— ๋Œ€ํ•ด ๊ฐ€๋ณ๊ฒŒ ์•Œ์•„๋ดค์œผ๋‹ˆ, tensorflow๋ฅผ ํ†ตํ•ด ๊ฐ„๋‹จํ•œ Linear Regression์„ ๊ตฌํ˜„ํ•ด๋ณด๊ณ ์ž ํ•œ๋‹ค. 

์ง€๋‚œ ๋ฒˆ์— ์‚ฌ์šฉํ–ˆ๋˜

๊ฐ’์„ ์‚ฌ์šฉํ•˜๊ณ ์ž ํ•œ๋‹ค.

import tensorflow as tf

x_train = [1,2,3] # x values to train on
y_train = [1,2,3] # y values to train on

# A TensorFlow Variable is a value TensorFlow itself updates during training
W = tf.Variable(tf.random.normal([1]), name = 'weight') # Weight
# tf.random.normal([1]) yields a one-element array
b = tf.Variable(tf.random.normal([1]), name = 'bias') # Bias

์šฐ์„  x๊ฐ’๊ณผ y ๊ฐ’์ด ์žˆ๋Š” train data๋ฅผ ์ž…๋ ฅํ•ด์ฃผ๊ณ , ๊ฐ€์ค‘์น˜(Weight)๊ฐ€ ๋  W์™€ ํŽธํ–ฅ์น˜(Bias)๊ฐ€ ๋  b๊ฐ’์„ ๋žœ๋ค์œผ๋กœ ์„ค์ •ํ•ด์ค€๋‹ค. (์–ด์ฐจํ”ผ W,b ๊ฐ’์€ ํ•™์Šต์„ ๋ฐ˜๋ณตํ•˜๋ฉฐ ์ตœ์ ํ™”๋œ ์ˆซ์ž๋กœ ์ ํ•ฉํ•ด๊ฐˆ ๊ฒƒ์ด๋‹ค.)

 

๊ทธ ํ›„ , 

# TensorFlow 1.x style only
hypothesis = x_train * W + b

cost = tf.reduce_mean(tf.square(hypothesis - y_train))

optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01)
train = optimizer.minimize(cost)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for step in range(2001):
  sess.run(train)
  if step % 20 == 0:
    print(step, sess.run(cost), sess.run(W), sess.run(b))

์ด ์ฝ”๋“œ๋ฅผ ๋ฐ”๋กœ ์‹คํ–‰ํ•˜๊ฒŒ ๋˜๋ฉด ๋งŽ์€ ์˜ค๋ฅ˜๊ฐ€ ์ƒ๊ธด๋‹ค.

That is because GradientDescentOptimizer, minimize, Session, and the rest are structured the TensorFlow 1.x way.
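As an aside, legacy 1.x code like the above can still run under TensorFlow 2.x through the tf.compat.v1 shim. Here is a minimal standalone sketch of that workaround, my own illustration rather than the approach this post takes:

import tensorflow as tf

# Must be called before any ops are created: switch back to 1.x graph mode
tf.compat.v1.disable_eager_execution()

x_train = [1,2,3]
y_train = [1,2,3]
W = tf.Variable(tf.random.normal([1]), name = 'weight')
b = tf.Variable(tf.random.normal([1]), name = 'bias')

hypothesis = x_train * W + b
cost = tf.reduce_mean(tf.square(hypothesis - y_train))

# The 1.x optimizer and session now live under tf.compat.v1
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate = 0.01)
train = optimizer.minimize(cost)

sess = tf.compat.v1.Session()
sess.run(tf.compat.v1.global_variables_initializer())
for step in range(2001):
  sess.run(train)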

๊ตฌ๊ธ€๋ง์„ ํ•˜๋ฉด, ๋‹ค๋ฅธ ๋ฐฉ์‹ (Keras) ๋ฅผ ์‚ฌ์šฉํ•ด์„œ ๊ตฌํ•˜๋Š” ๋ฐฉ์‹๋“ค์€ ์žˆ์—ˆ์ง€๋งŒ, cost ํ•จ์ˆ˜๋ฅผ ๊ตฌ์„ฑํ•˜๊ณ  ๋ฐ˜๋ณต๋ฌธ์„ ํ†ตํ•ด W์™€ b๋ฅผ ์ง์ ‘ ์—…๋ฐ์ดํŠธ ํ•˜๋Š” ๋ฐฉ์‹๋“ค์€ ๋งŽ์ง€ ์•Š์•˜๋‹ค. 

๋‹คํ–‰ํžˆ๋„ function์„ ๊ตฌ์„ฑํ•˜์—ฌ W์™€ b ๊ฐ’์„ ๊ฐฑ์‹ ํ•˜๋Š” ์ฝ”๋“œ๊ฐ€ ์žˆ์–ด์„œ ์ฐธ๊ณ ํ•˜์—ฌ ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•ด๋ณด์•˜๋‹ค.

 

# TensorFlow 2.x
## Define the hypothesis H(x) = W*x + b
@tf.function
def hypothesis(x):
  return W*x + b

## Compute the cost (mean squared error)
@tf.function
def cost(pred, y_train):
  return tf.reduce_mean(tf.square(pred - y_train))

## Optimizer: plain stochastic gradient descent
optimizer = tf.optimizers.SGD(learning_rate = 0.01)

## One optimization step: compute gradients and apply them to W and b
@tf.function
def optimization():
  with tf.GradientTape() as g:
    pred = hypothesis(x_train)
    loss = cost(pred, y_train)

  gradients = g.gradient(loss, [W,b])
  optimizer.apply_gradients(zip(gradients, [W,b]))

## Training loop
for step in range(1, 2001):
  optimization()

  # Print the updated values every 20 steps
  if step % 20 == 0:
    pred = hypothesis(x_train)
    loss = cost(pred, y_train)
    tf.print(step, loss, W, b)
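For what it's worth, with plain SGD the optimizer.apply_gradients call is equivalent to the manual update W = W - learning_rate * dLoss/dW (and likewise for b). A hand-rolled version of the same step, purely as an illustration of what the optimizer is doing:

# Equivalent hand-written SGD step (illustration only)
def manual_optimization():
  with tf.GradientTape() as g:
    loss = cost(hypothesis(x_train), y_train)
  dW, db = g.gradient(loss, [W, b])
  W.assign_sub(0.01 * dW) # W = W - learning_rate * dLoss/dW
  b.assign_sub(0.01 * db) # b = b - learning_rate * dLoss/db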

20~ 100๋ฒˆ ๋ฐ˜๋ณตํ–ˆ์„ ๋•Œ์˜ step, cost, W, b ๊ฐ’๊ณผ 1900~2000 ๋ฒˆ ๋ฐ˜๋ณตํ–ˆ์„ ๋•Œ์˜ ๊ฐ’

ํšŒ์ฐจ๋ฅผ ๋ฐ˜๋ณตํ•  ์ˆ˜๋ก, Cost์˜ ๊ฐ’์ด ๋งค์šฐ ์ž‘์€ ๊ฐ’์œผ๋กœ ์ˆ˜๋ ดํ•˜์˜€๊ณ , W ๊ฐ’์€ 1๋กœ, b๊ฐ’์€ 0์œผ๋กœ ์ˆ˜๋ ดํ•˜๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. 

์ฆ‰, H(x) = 1 x + 0 ์ด๋ผ๋Š” ์‹์— ๊ฐ€๊นŒ์›Œ์ง€๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. 

'DataScience > Machine Learning Basic' ์นดํ…Œ๊ณ ๋ฆฌ์˜ ๋‹ค๋ฅธ ๊ธ€

Multivariable linear regression  (0) 2022.02.21
Linear Regression Cost Function & Gradient Descent Algorithm  (0) 2022.02.20
Linear Regression  (0) 2022.01.25
Tensorflow ๊ธฐ๋ณธ Operation  (0) 2022.01.21
Tensorflow in Pycharm ๊ทธ๋ฆฌ๊ณ  Google Colab  (0) 2022.01.21