5 STAR AI.IO TOOLS FOR YOUR BUSINESS


Build A Chatbot

Generate your first professional chatbot AI project and take your business to another level.


ChatGPT API - Build A Chatbot App





Transcript

Part 1

OpenAI recently released the API for its most advanced language model, GPT-3.5 Turbo. This is the model that powers ChatGPT, and it's ten times cheaper than its predecessor, GPT-3. It can be used both for conversational chat and for the standard GPT-3-style completions we're all familiar with. Let's learn how to build a conversational chatbot app using that new model along with SvelteKit and Vercel Edge Functions.

We'll start out in the OpenAI Playground to get a better look at how this new API actually works before we integrate it into an app. On the left, we can see that we have a system input. This is where we can prime the model by giving it an identity or some context it might need from the start. For example, we could say: "You're an enthusiastic and witty customer support agent working for Huntabyte. Your name is Axel, and you're happy to help." The AI will use that information when responding to the user. So if I enter a message into the user prompt, say "Who are you?", and click Submit,

it gives us the kind of response we'd expect. We can continue the conversation by informing the assistant that we're trying to learn how to code. If we click Submit, it asks us what language we're interested in learning, and we can just say Python. It then retains the context and renders out a nice list of resources for learning Python, installing it, and practicing.

What's actually happening here, though, is that we're sending the entire chat history with each request. The model reads that history to determine the current context of the chat before it generates a response. So if we remove some of the last few messages (let's remove this, this, and this) and then ask "What resources did you provide me with?" and click Submit, it tells us that it has not provided us with any resources yet. It doesn't actually have that context sitting somewhere on a server; we have to pass the entire history of the chat with each request, so it knows what we were talking about before and what information it has already given us. That's something important to keep in mind. Now let's get into the code.

As always, the starting and final code can be found in the description below. We'll be starting out with the styles and markup for the main app page already in place, as well as a ChatMessage component that changes its appearance depending on whether the message is from the user or from the AI. Nothing too complex is going on here; these are all daisyUI Tailwind components. I've also gone ahead and added my OpenAI API key to the .env file as OPENAI_KEY. Don't worry, this key will be deleted before the video is published, but you can try to use it if you want.

Now let's start by installing the OpenAI Node SDK. Then we can set up an endpoint at /api/chat with a +server.ts file inside of it. This is actually going to be a POST request handler, so we change the handler name to POST, bring in our types, and destructure the request from the request event that gets passed into the handler.

Now let's take a look at the OpenAI API reference documentation so we can understand how we're going to create these chat completions. The request body requires a model and messages. Looking at the example, messages is an array of message objects, each with a role and content. The three roles we have access to are assistant, which is the AI; user, which is our user; and system, which is the message we primed the model with at the beginning. Looking at the response, it returns a message with a role of assistant (the AI-generated response) along with some content.

Something you'll notice is that the response doesn't include all of our messages; it only contains the response to the latest question. We therefore have to keep track of the messages ourselves, so that on the next request we can send the full history back and the model has that full context, like we discussed a few minutes ago. Also, since we'll be streaming this data in, it's important to know that the stream is terminated by a `data: [DONE]` message, which is part of the server-sent events setup we'll do in a few minutes.

So here's the plan: the client side sends an array of messages via the request body to this endpoint; we take those messages and send them to the OpenAI API to get a response, and then we stream that response back to the client.

The first thing I'm going to do is set up a try/catch block. Then let's make sure our OpenAI key, the one we set in our environment variables, does in fact exist; if it doesn't, we throw a new error. Then we grab the request body by assigning requestData to await request.json(). This request data is going to be an object with a messages property holding an array of messages; that's what we'll send from the client side, which we'll set up in a few minutes. If requestData doesn't exist, or is falsy, we throw a new error. Then we set reqMessages, which is of type ChatCompletionRequestMessage[] (a type that comes from the openai package), equal to requestData.messages.

If we take a look at this type, we can see it has a role, which is one of system, user, or assistant; content, which is the contents of the message; and name, which is the name of the user in a multi-user chat. We won't take advantage of name here, but it's there if you want to explore. Recall from earlier that the entire context, the entire chat history, is sent with every request.

Well, OpenAI still has a 4096-token limit on a single request, and each of those chat messages counts toward the tokens it has to process, so we don't want our token count to exceed 4096. How can we prevent the messages from exceeding that? We need a tokenizer, and we can install a library called gpt3-tokenizer. There are other libraries out there; I just found that this one works well for me. Then we'll set up a new file inside our lib directory and call it tokenizer.ts.

5:16

have to set this up is a little bit

5:17

weird I found this workaround through

5:18

the GitHub issues I'm not sure if this

5:20

is a spell kit problem or a gbt3

5:23

tokenizer problem but it's kind of weird

5:24

so the first thing we'll do is import

5:26

gpt3 tokenizer import from gbt3-tok

5:30

tokenizer this is the default export and

5:32

then we'll set a gpt3 tokenizer variable

5:35

and it's going to be a type of gpt3

5:37

tokenizer import which equals we're

5:40

going to say if the type of gpt3

5:42

tokenizer import is equal to a function

5:45

so if it's a type of function then it's

5:47

going to be equal to a type of gp3

5:48

tokenizer import otherwise we'll say

5:51

gpt3 tokenizer import as any dot default

5:54

again I know this looks weird but this

5:55

is the way that I was able to get this

5:56

to work if you know of a better way

5:57

please let me know in the comments down

5:58

below so the next thing we'll do is set

6:00

up a tokenizer and it's going to be a

6:02

new gpt3 tokenizer and it's gonna have

6:04

type gpt3 and then we'll export a

6:05

function which is called called get

6:07

tokens it's going to take in an input

6:09

which will be of type string and it's

6:10

going to return a number so the number

6:12

of tokens so we can get the tokens by

6:14

using the tokenizer dot encode method

6:16

and we'll pass in our input and then

6:17

we'll just return the tokens dot

6:19

text.link which is going to be the total

6:21

number of tokens that this given string

6:23

contains so now back in our endpoint
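The interop workaround described above can be factored into a small helper; the commented lines sketch how tokenizer.ts would then use it. The gpt3-tokenizer calls shown in the comment follow the video's description and aren't verified here:

```typescript
// Resolves a module's "real" default export: some bundlers hand you the
// class/function directly, others hand you an object with a `default` key.
export function resolveDefault<T>(mod: T | { default: T }): T {
  return typeof mod === 'function' ? (mod as T) : (mod as { default: T }).default;
}

// Sketch of src/lib/tokenizer.ts using the helper (API as described above):
//
//   import GPT3TokenizerImport from 'gpt3-tokenizer';
//   const GPT3Tokenizer = resolveDefault(GPT3TokenizerImport as any);
//   const tokenizer = new GPT3Tokenizer({ type: 'gpt3' });
//   export function getTokens(input: string): number {
//     return tokenizer.encode(input).text.length;
//   }
```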

Now, back in our endpoint, the first thing we do is check that we do in fact have reqMessages; if not, we throw a new error. Then we set up a token count: let tokenCount = 0. For each request message we get that message's token count and add it to the tokenCount variable, so we end up with a total number of tokens (we'll use this shortly). So we call reqMessages.forEach; recall that each of these messages has a content property and a role property. What we care about is getting the tokens from message.content and adding that to tokenCount. We won't actually use this until a bit later, but it's good to get it out of the way now.
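As a small self-contained sketch, that counting loop looks like this; getTokens stands in for any string-to-token-count function, such as the one exported from tokenizer.ts:

```typescript
// Message shape matching the openai package's ChatCompletionRequestMessage.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Sums the token counts of every message's content, as done in the endpoint.
function countMessageTokens(
  messages: ChatMessage[],
  getTokens: (input: string) => number
): number {
  let tokenCount = 0;
  messages.forEach((msg) => {
    tokenCount += getTokens(msg.content);
  });
  return tokenCount;
}
```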

Next we're going to hit OpenAI's moderation endpoint. This is basically to prevent us from getting banned from using their APIs if our users pass in some crazy stuff. It's essentially an endpoint that gives you back some results, and if those results are flagged, we can throw an error instead of letting the message continue on to the actual chat completion endpoint. If we look at the API reference and scroll down to moderations, we can see that we pass in an input and it gives us results; if something is flagged, flagged will be true, and if not, it will be false.

Let's set that up. moderationRes is a fetch request to https://api.openai.com/v1/moderations. We pass it some headers: the Content-Type, which is application/json, and the Authorization header, which is a Bearer token carrying the OpenAI key from our private environment variables. The method is POST, and the body is JSON.stringify with the input set to the content of reqMessages[reqMessages.length - 1], the very last message; the other messages should already have been vetted on previous requests (if they did some crazy stuff there, we're not accounting for that in this video). Then we get the response with moderationData = await moderationRes.json(). The response has a results property, an array of objects, so we take the first element of moderationData.results, and if results.flagged, we throw a new error.
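A sketch of that moderation step. The URL, headers, and body shape follow the description above; the two small helpers are factored out here purely for illustration:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };
type ModerationData = { results: { flagged: boolean }[] };

// We only moderate the most recent message; earlier ones were vetted
// on previous requests.
function lastMessageContent(messages: ChatMessage[]): string {
  return messages[messages.length - 1].content;
}

// True when OpenAI's moderation endpoint flags the input.
function isFlagged(data: ModerationData): boolean {
  return data.results[0].flagged;
}

// Usage inside the request handler (OPENAI_KEY from private env vars):
async function moderate(messages: ChatMessage[], OPENAI_KEY: string) {
  const moderationRes = await fetch('https://api.openai.com/v1/moderations', {
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_KEY}`
    },
    method: 'POST',
    body: JSON.stringify({ input: lastMessageContent(messages) })
  });
  const moderationData: ModerationData = await moderationRes.json();
  if (isFlagged(moderationData)) {
    throw new Error('Query flagged by OpenAI moderation');
  }
}
```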

If all of that is good, let's define our prompt: "You are a virtual assistant for a company called Huntabyte. Your name is Axel Smith." Then we add this prompt's tokens to our token count: tokenCount += getTokens(prompt). Next: if tokenCount is greater than or equal to 4000, you could do a number of things. One option worth looking into is ripping the first message out of the reqMessages array, as long as at least two messages remain; you remove the older messages starting at index 0 and keep deleting until the token count is under 4000. Or you can just throw an error, which the client side could handle by resetting the messages. There are a few different ways to do it; for this simple example I'm just going to throw an error, but definitely explore other ideas and ways to handle this more smoothly.
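The drop-the-oldest-message strategy suggested above could look like this hypothetical helper (not code from the video); it trims from index 0 while at least two messages remain:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Removes the oldest messages until the running token count fits the budget,
// never trimming below two messages. Returns a new array.
function trimToTokenBudget(
  messages: ChatMessage[],
  getTokens: (input: string) => number,
  budget: number
): ChatMessage[] {
  const trimmed = [...messages];
  const total = () => trimmed.reduce((sum, m) => sum + getTokens(m.content), 0);
  while (trimmed.length > 2 && total() >= budget) {
    trimmed.shift(); // drop the oldest message (index 0)
  }
  return trimmed;
}
```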

Then we construct the messages array we'll pass to the chat completion endpoint. messages is of type ChatCompletionRequestMessage[]; it starts with the system message, with a role of system and content set to the prompt we created above, and then we spread the rest of the reqMessages after it.

Then we set up our chat request options: a new variable called chatRequestOpts of type CreateChatCompletionRequest. It's an object with a model, which is gpt-3.5-turbo; the messages we just created; a temperature, we'll say 0.9, keep it frisky; and stream, which is true. TypeScript complains that we can't use the namespace as a value; we actually need to import CreateChatCompletionRequest as a type, so let's set that up.
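Putting the system message and options together, here's a sketch of the request body builder; the CreateChatCompletionRequest type from the openai package is approximated inline:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Approximation of the openai package's CreateChatCompletionRequest type.
interface ChatRequestOpts {
  model: string;
  messages: ChatMessage[];
  temperature: number;
  stream: boolean;
}

// Builds the request body: the system prompt first, then the chat history.
function buildChatRequestOpts(
  prompt: string,
  reqMessages: ChatMessage[]
): ChatRequestOpts {
  const messages: ChatMessage[] = [
    { role: 'system', content: prompt },
    ...reqMessages
  ];
  return {
    model: 'gpt-3.5-turbo',
    messages,
    temperature: 0.9,
    stream: true
  };
}
```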

Now we should be good to actually issue the request. chatResponse is a fetch request to the chat completions endpoint at https://api.openai.com/v1/chat/completions. We pass the typical headers: Authorization, which needs that Bearer token, and the Content-Type we're submitting, application/json. The method is POST, and the body is JSON.stringify of the chatRequestOpts we just defined.

Next we check that the response was OK; if it wasn't, we read the error and throw a new error with that message. If everything is good, we return a new Response, a proxied response. Remember, stream: true tells the OpenAI API that we want a streamed response back rather than a regular JSON response, so we can proxy that stream back to our client through our own endpoint. We pass chatResponse.body, set the Content-Type header to text/event-stream, and get rid of the old placeholder response. If we catch any errors, we console them and return json(), which comes from @sveltejs/kit.
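The proxying piece in isolation: a sketch of handing OpenAI's streamed body straight back to the client with the SSE content type.

```typescript
// Wraps an upstream streamed body in a Response the client can consume as
// server-sent events. Passing the body through untouched is what lets the
// endpoint act as a proxy for OpenAI's stream.
function proxyStream(body: ReadableStream<Uint8Array> | null): Response {
  return new Response(body, {
    headers: { 'Content-Type': 'text/event-stream' }
  });
}
```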

OK, we should now have our endpoint functioning as we'd expect. Let me update this import statement so it doesn't look so sloppy. To recap: we're getting the request data and checking that it exists, then getting the messages from that request


Part 2
data, that is, from the request body. If there are no messages, we throw an error. We then initialize the token count and go through each of the request messages, adding each message's tokens to the total. Then we run the moderation request, where we check that the user's message isn't saying anything crazy; if the results are flagged we throw an error, otherwise we move on to constructing our prompt. We add the prompt's tokens to the total token count before checking that we're not over 4000 (I believe the real limit is 4096 for this new API, but I'm just going to leave it at 4000). For the messages, we construct a new array with the first message being our system message containing the prompt, followed by the rest of the request messages passed from the client side. We define our options, which hold the model, messages, temperature, and stream (this one is important), and make our request over to OpenAI to get the completion. If everything is good, we return it back to the client side with a content type of text/event-stream. Now let's set up our client side.
The first thing we have to do here is install a package called sse.js. If you're using TypeScript, it doesn't come with type definitions, so we set them up ourselves: make a new file inside lib called sse.d.ts. I'm just going to paste this in, but you can take a look at it; it's basically all the types for the library. This library unlocks a couple of capabilities that don't come out of the box with EventSource: it extends the native EventSource, adding a few more options that we'll want for server-sent events.
OK, so in our +page.svelte, our homepage, the first thing we do is set up a few variables. We have query, of type string, defaulting to an empty string; answer, also a string; and loading, a boolean defaulting to false. Then we set up a chatMessages array to store all of our chat messages. Remember, OpenAI's API does not keep track of our messages for us and doesn't return them, so we need some way to keep track of them in local state; and since our server side is going to be serverless or edge functions, we have to keep that state here. So chatMessages is of type ChatCompletionRequestMessage[], set to an empty array to begin with.
Then we define a function called handleSubmit. It's asynchronous, and it won't do anything for the moment, but we wire it up to the on:submit of our form, with preventDefault so the page doesn't reload, and we bind the value of the input to query. Now, whenever we submit the form, it calls this handleSubmit function.
Now, we're going to keep all the chat messages from our client side and from the assistant inside this chatMessages array. So when the form is submitted, we first set loading to true, then set chatMessages to whatever is currently in chatMessages, spread out first, with our own latest message added at the end: the role is user, and the content is the query, whatever was just submitted in the form. Then we set up a new server-sent events connection: we define an eventSource, a new SSE (imported from sse.js) pointed at /api/chat. The headers carry a Content-Type of application/json, and the payload contains those messages. Remember, on the server side we expect a messages property (requestData.messages becomes our reqMessages), and that's why we shape the payload this way.
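The message append and payload shaping can be sketched as two small helpers (the names are mine, not from the video); the payload shape matches what the endpoint reads out of request.json():

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Returns a new array with the user's latest query appended, mirroring
// chatMessages = [...chatMessages, { role: 'user', content: query }].
function appendUserMessage(
  chatMessages: ChatMessage[],
  query: string
): ChatMessage[] {
  return [...chatMessages, { role: 'user', content: query }];
}

// The SSE payload: a JSON string whose `messages` property is what the
// endpoint destructures out of the request body.
function buildSsePayload(chatMessages: ChatMessage[]): string {
  return JSON.stringify({ messages: chatMessages });
}
```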
Once we've pointed that variable at the new SSE, we clear out our query by setting it to an empty string so the next question can be typed. Then we add a couple of event listeners to the event source. The first is for error, and we can set up a function to handle errors, so that whenever errors occur inside our application we handle them in one place. I'll define a handleError function; it's a generic that takes in whatever error type we get, sets loading to false, sets query to an empty string, and sets answer to an empty string, pretty much clearing everything out, and then just consoles the error for now. Feel free to do whatever else you want, such as throwing up a toast notification. Then, when we add the event listener for error, we just pass the handleError function; the error event gets passed into it, it retains that shape, and it works its magic.
The next event listener we add is for message. As tokens are generated, they get passed here, and that's how we'll render them across the screen as they come in. Each message gives us a new event, so I'll set up a try/catch. We set loading to false, because once we have at least one part of the message we're no longer loading. Then we first check whether e.data is equal to [DONE], with the brackets. If we look at the documentation: partial message deltas are sent, like in ChatGPT, whenever stream is set to true, and the stream is terminated by a `data: [DONE]` message; that's how we know we're done receiving tokens from the stream. So if the data is [DONE], we set chatMessages to whatever is currently in chatMessages plus the latest message we got back from the assistant: role assistant, content answer, which we will have populated with the streamed tokens as they came in. Then we set answer back to an empty string, because we're now done, and return.
If it's not [DONE], we get a completionResponse, which is JSON.parse(e.data). That gives us an object with a choices property, an array, and we want the delta from the first element of that array; the content on that delta is where our tokens live. So we destructure the delta property from index 0 of completionResponse.choices. Then, if delta.content exists, if it's not undefined, we assign answer: if answer currently has a value we use answer, otherwise we use an empty string, plus delta.content. We're basically appending the tokens to answer as they come in; that's how we fill in the chat bubble. It keeps coming in piece by piece: for the first chunk, answer is an empty string plus the content, and each subsequent chunk is appended to the end, just like ChatGPT does. If we catch any errors, we call handleError and pass in the error; we're already consoling it there, so we'll get a console message on the client side.
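The [DONE] and delta handling reduces to a small pure function, sketched here with hypothetical names; the data shapes follow OpenAI's streamed chat-completion chunks as described above:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// One streamed SSE chunk from the chat completions endpoint.
interface CompletionChunk {
  choices: { delta: { content?: string } }[];
}

// Applies one SSE `message` event's data to the running state and returns
// the updated { answer, chatMessages, done } triple.
function applyChunk(
  answer: string,
  chatMessages: ChatMessage[],
  data: string
): { answer: string; chatMessages: ChatMessage[]; done: boolean } {
  if (data === '[DONE]') {
    // Stream finished: commit the accumulated answer as an assistant message.
    return {
      answer: '',
      chatMessages: [...chatMessages, { role: 'assistant', content: answer }],
      done: true
    };
  }
  const completionResponse: CompletionChunk = JSON.parse(data);
  const [{ delta }] = completionResponse.choices;
  if (delta.content !== undefined) {
    answer = (answer ?? '') + delta.content; // append the new token(s)
  }
  return { answer, chatMessages, done: false };
}
```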
Then, outside of the event listener but still inside the handleSubmit function, we call eventSource.stream(), which basically tells it to start sending messages over server-sent events. Now we should start to get some messages. If we start up our dev server and head to localhost:5173, we'll see the components I currently have placed here; they aren't actually doing anything, they're just for demonstration purposes. What we want to test is the ability to receive a streamed response. So I type a message and say hello... and we get an error. Checking back in the /api/chat endpoint, I actually put http instead of https for this endpoint; let me check the rest of them really quick, and now we should be good to go. So I come back here and type in hello, and
we now get the response streamed in, so we are in fact getting data back from the server. Let's now render it on the page and get our chat functional. We come down here, get rid of all these placeholder chat messages, and set up an each block: each chatMessages as message, because that's where we're storing all of our messages. The ChatMessage type is message.role (we already have access to the role, which is why I defined the component the way I did), and the message is message.content. One thing to remember is that we don't add the streamed-in answer to chatMessages until it has finished streaming, so we also add a check: if answer, we render another ChatMessage with a type of assistant and a message equal to answer. We can do something similar for loading: if loading, we show a ChatMessage of type assistant whose message is "Loading...". Now if we come back
into the application, let me move this out of the way a bit and just say hello. We now see "Hello there, I'm Axel Smith, how can I help you today?" I want to learn to code... I type Python... and you can see the message being streamed in, but it's actually down below, and we have to keep scrolling to see it. So let's set up a little helper function that automatically scrolls to the bottom of the container whenever we send a new message, as well as when new messages are streamed in.
come back into the app here I'm just

19:20

going to go up to the top I'm going to

19:21

define a new function called scroll to

19:23

bottom and the reason we're setting this

19:25

up like this is to add a little bit of a

19:26

delay in there because sometimes the

19:28

HTML isn't finished rendering and it

19:29

doesn't go to the right spot so this is

19:30

the way that I found to make sure it

19:32

happens every single time so we can set

19:34

up a timeout here which is going to take

19:35

in a function and we need to assign a

19:37

div to be the scroll to div so right now

19:39

I already have one set up so at the

19:41

bottom of this container here where all

19:42

these messages are being rendered I have

19:44

this div set up here so we can do is we

19:46

can set up a new variable called scroll

19:48

to div it's gonna be a type HTML div

19:50

element and we'll just say bind this

19:53

equal to scroll to div like so and we'll

19:55

come back up to our scroll to bottom

19:57

function and we'll say scroll to div dot

19:59

scroll into view we'll set the behavior

20:01

to smooth block to end and in line to

20:04

nearest and then here inside of this

20:06

timeout still I'm going to set this to

20:08

100 and then I'm going to add this

20:09

scroll to bottom function to this event

20:11

listener for messages so we'll say

20:13

scroll to bottom we're just going to

20:14

call this like so and then we'll also

20:16

add it at the bottom here underneath of

20:18

stream this to make sure that it happens

20:20

when you first submit the request so

20:21

that we have our newly submitted message

20:23

visible and then also as the response

20:25

comes back from openai we scroll down to

20:27

make sure we can see that new message as

20:28

it's streamed in as well so now if we
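The auto-scroll helper described above can be sketched like this. It is written as a factory for illustration; in the video it is a plain `scrollToBottom` function inside the component, with `scrollToDiv` bound via `bind:this` to an empty div at the bottom of the message container:

```javascript
// Sketch of the auto-scroll helper; assumes a browser DOM element bound
// via Svelte's `bind:this={scrollToDiv}`.
function makeScrollToBottom(scrollToDiv, delayMs = 100) {
  return function scrollToBottom() {
    // The ~100ms delay gives the HTML time to finish rendering the new
    // message; without it the scroll can land in the wrong spot.
    setTimeout(() => {
      scrollToDiv.scrollIntoView({
        behavior: 'smooth',
        block: 'end',
        inline: 'nearest',
      });
    }, delayMs);
  };
}
```

The returned function is then called both when a message is submitted and inside the event listener that fires as tokens are streamed in.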

20:30

save that come back in our application

20:31

say hello I'm just going to type a

20:33

couple messages here

20:36

and we can see when I submit this new

20:38

message here we get scroll down

20:40

and then as the message is streamed in

20:41

our container Scrolls down to the bottom

20:43

of the message and then one last thing

20:44

that we can do to make this look a

20:45

little bit better we can just take one

20:46

of these chat messages here and we'll

20:48

just place it at the top and we'll set

20:49

it to be the assistant and then for the

20:52

message we'll just say hello ask me

20:54

anything you want

20:56

that way when someone visits the website

20:57

they have this prompt here already set

20:59

up so they know what they're going to do

21:00

okay cool and it doesn't actually get

21:02

sent off with the rest of the requests

21:03

and all that stuff it's just there as a

21:05

visual aid all right now let's deploy

21:06

this application to Vercel taking

21:08

advantage of both the edge functions as

21:10

well as the serverless runtime so the

21:11

first thing we need to do is install the

21:13

SvelteKit adapter for Vercel (@sveltejs/adapter-vercel)

21:15

and then I'm going to come into my

21:17

svelte.config.js file and we're going to

21:19

change adapter-auto to adapter-vercel

21:21

and then within this adapter we're

21:23

going to have an object and we're going

21:24

to set the runtime by default to node.js

21:27

18.x so by default it's going to run the

21:29

serverless node runtime and then we can
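The config change described here looks roughly like this (a minimal sketch of svelte.config.js; a real SvelteKit project usually carries extra options such as preprocess):

```javascript
// svelte.config.js — minimal sketch
// (install the adapter first: npm i -D @sveltejs/adapter-vercel)
import adapter from '@sveltejs/adapter-vercel';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    // Default every function to the serverless Node.js runtime;
    // individual routes can still opt into the Edge runtime.
    adapter: adapter({ runtime: 'nodejs18.x' }),
  },
};

export default config;
```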

21:31

actually get more specific with each

21:32

function so for example our post

21:34

function here request Handler we can

21:36

actually set this up to run on the edge

21:38

so the first thing I'll do is import

21:39

the Config type which comes from the

21:42

Vercel adapter and then we'll just

21:43

export a config it's going to be of type

21:45

config and we'll just set the runtime to

21:48

Edge so then let's just commit all this
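The per-route override described here is just a config export from the endpoint file. The route path below is an assumption; in TypeScript you would `import type { Config } from '@sveltejs/adapter-vercel'` and annotate `export const config: Config`:

```javascript
// src/routes/api/chat/+server.js — the route path is an assumption.

/** @type {import('@sveltejs/adapter-vercel').Config} */
export const config = {
  runtime: 'edge', // run this endpoint on Vercel's Edge runtime
};
```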

21:50

code to GitHub and head over to the

21:52

Vercel dashboard where we can deploy a

21:54

new project and then I'm going to select

21:55

that git repository when I'm deploying a

21:57

project we need to set our environment

21:58

variable so let me just grab those from

22:00

the .env file using Vercel's

22:03

incredible copy-paste we now have it in

22:04

there and then we can click on deploy

22:06

and then it will take a few seconds but

22:07

eventually we will get the

22:08

congratulations we just deployed a new

22:10

project to Vercel so we can actually go

22:11

and check it out now

22:13

and as we can see it is working as

22:15

expected

22:17

and that's going to wrap up today's

22:18

video so if you got value out of this

22:20

video don't forget to drop a like And

22:21

subscribe let me know what type of

22:22

content you all want to see next in the

22:24

comments down below thank you so much

22:25

for watching and I will see you in the

22:26

next one

22:27

[Music]



If you find my content useful and want to support the channel, consider contributing a coffee ☕: https://hbyt.us/coffee


In this video, we cover how to build and deploy your own full-stack AI application using SvelteKit, OpenAI's new ChatGPT / Chat Completions API (gpt-3.5-turbo), and Vercel Edge Functions. The new ChatGPT API is 10x cheaper and much faster than GPT-3. We also learn how to handle moderations, token limits, and more.

Chapters:

00:00 - What we're building

00:20 - How the model works

01:40 - Starting Code

02:08 - Chat Endpoint/API Route

04:55 - Setup Tokenizer

06:24 - Finish Endpoint/API Route

12:20 - Install & Setup SSE

12:49 - Client-Side Route

21:05 - Deployment Configs

21:54 - Deploy to Vercel

Taking Your Existing Business Further With AI: Build A Chatbot

Create an AI Bot in Discord with the NEW ChatGPT API! 🤖



Build A Chatbot

ALL 5 STAR AI.IO PAGE STUDY

How AI and IoT are Creating An Impact On Industries Today


HELLO AND WELCOME  TO THE 


5 STAR AI.IO TOOLS FOR YOUR BUSINESS


OUR NEW WEBSITE IS ABOUT 5-STAR AI AND IoT TOOLS ON THE NET.

We provide you the best Artificial Intelligence tools and services that can be used to create and improve BUSINESS websites AND CHANNELS.

This site includes tools for creating interactive visuals, animations, and videos, as well as tools for SEO, marketing, and web development. It also includes tools for creating and editing text, images, and audio. The website is intended to provide users with a comprehensive list of AI-based tools to help them create and improve their business.

https://studio.d-id.com/share?id=078f9242d5185a9494e00852e89e17f7&utm_source=copy





Hello and welcome to our new site, which shares with you the most powerful web platforms and tools available on the web today.

All platforms, websites and tools use artificial intelligence (AI) and carry a 5-star rating.

All platforms, websites and tools are available in free and paid Pro tiers.

These platforms, websites and tools are the best for growing your business in 2022/3.


A Guide for AI-Enhancing Your Existing Business Application



What is Artificial Intelligence and how does it work? What are the 3 types of AI?

What are the 3 types of AI?

The 3 types of AI are:

General AI: AI that can perform all of the intellectual tasks a human can. Currently, no form of AI can think abstractly or develop creative ideas in the same ways as humans.

Narrow AI: Narrow AI commonly includes visual recognition and natural language processing (NLP) technologies. It is a powerful tool for completing routine jobs based on common knowledge, such as playing music on demand via a voice-enabled device.

Broad AI: Broad AI typically relies on exclusive data sets associated with the business in question. It is generally considered the most useful AI category for a business. Business leaders will integrate a broad AI solution with a specific business process where enterprise-specific knowledge is required.

How can artificial intelligence be used in business?

AI is providing new ways for humans to engage with machines, transitioning personnel from pure digital experiences to human-like natural interactions. This is called cognitive engagement. AI is augmenting and improving how humans absorb and process information, often in real-time. This is called cognitive insights and knowledge management. Beyond process automation, AI is facilitating knowledge-intensive business decisions, mimicking complex human intelligence. This is called cognitive automation.

What are the different artificial intelligence technologies in business?

Machine learning, deep learning, robotics, computer vision, cognitive computing, artificial general intelligence, natural language processing, and knowledge reasoning are some of the most common business applications of AI.

What is the difference between artificial intelligence, machine learning, and deep learning?

Artificial intelligence (AI) applies advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks capable of learning unsupervised from data that is unstructured or unlabeled.

What are the current and future capabilities of artificial intelligence?

Current capabilities of AI include examples such as personal assistants (Siri, Alexa, Google Home), smart cars (Tesla), behavioral adaptation to improve the emotional intelligence of customer support representatives, using machine learning and predictive algorithms to improve the customer's experience, transactional AI like that of Amazon, personalized content recommendations (Netflix), voice control, and learning thermostats. Future capabilities of AI might include fully autonomous cars, precision farming, future air traffic controllers, future classrooms with ambient informatics, urban systems, smart cities and so on.

To know more about the scope of artificial intelligence in your business, please connect with our expert.


Glossary of Terms


Application Programming Interface(API):

An API, or application programming interface, is a set of rules and protocols that allows different software programs to communicate and exchange information with each other. It acts as a kind of intermediary, enabling different programs to interact and work together, even if they are not built using the same programming languages or technologies. API's provide a way for different software programs to talk to each other and share data, helping to create a more interconnected and seamless user experience.

Artificial Intelligence(AI):

the intelligence displayed by machines in performing tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and language understanding. AI is achieved by developing algorithms and systems that can process, analyze, and understand large amounts of data and make decisions based on that data.

Compute Unified Device Architecture(CUDA):

CUDA is a way that computers can work on really hard and big problems by breaking them down into smaller pieces and solving them all at the same time. It helps the computer work faster and better by using special parts inside it called GPUs. It's like when you have lots of friends help you do a puzzle - it goes much faster than if you try to do it all by yourself.

The term "CUDA" is a trademark of NVIDIA Corporation, which developed and popularized the technology.

Data Processing:

The process of preparing raw data for use in a machine learning model, including tasks such as cleaning, transforming, and normalizing the data.

Deep Learning(DL):

A subfield of machine learning that uses deep neural networks with many layers to learn complex patterns from data.

Feature Engineering:

The process of selecting and creating new features from the raw data that can be used to improve the performance of a machine learning model.

Freemium:

You might see the term "Freemium" used often on this site. It simply means that the specific tool that you're looking at has both free and paid options. Typically there is very minimal, but unlimited, usage of the tool at a free tier with more access and features introduced in paid tiers.

Generative Art:

Generative art is a form of art that is created using a computer program or algorithm to generate visual or audio output. It often involves the use of randomness or mathematical rules to create unique, unpredictable, and sometimes chaotic results.

Generative Pre-trained Transformer(GPT):

GPT stands for Generative Pretrained Transformer. It is a type of large language model developed by OpenAI.

GitHub:

GitHub is a platform for hosting and collaborating on software projects.


Google Colab:

Google Colab is an online platform that allows users to share and run Python scripts in the cloud.

Graphics Processing Unit(GPU):

A GPU, or graphics processing unit, is a special type of computer chip that is designed to handle the complex calculations needed to display images and video on a computer or other device. It's like the brain of your computer's graphics system, and it's really good at doing lots of math really fast. GPUs are used in many different types of devices, including computers, phones, and gaming consoles. They are especially useful for tasks that require a lot of processing power, like playing video games, rendering 3D graphics, or running machine learning algorithms.

Large Language Model(LLM):

A type of machine learning model that is trained on a very large amount of text data and is able to generate natural-sounding text.

Machine Learning(ML):

A method of teaching computers to learn from data, without being explicitly programmed.

Natural Language Processing(NLP):

A subfield of AI that focuses on teaching machines to understand, process, and generate human language.

Neural Networks:

A type of machine learning algorithm modeled on the structure and function of the brain.

Neural Radiance Fields(NeRF):

Neural Radiance Fields are a type of deep learning model that can be used for a variety of tasks, including image generation, object detection, and segmentation. NeRFs are inspired by the idea of using a neural network to model the radiance of an image, which is a measure of the amount of light that is emitted or reflected by an object.

OpenAI:

OpenAI is a research institute focused on developing and promoting artificial intelligence technologies that are safe, transparent, and beneficial to society.

Overfitting:

A common problem in machine learning, in which the model performs well on the training data but poorly on new, unseen data. It occurs when the model is too complex and has learned too many details from the training data, so it doesn't generalize well.

Prompt:

A prompt is a piece of text that is used to prime a large language model and guide its generation.

Python:

Python is a popular, high-level programming language known for its simplicity, readability, and flexibility (many AI tools use it)

Reinforcement Learning:

A type of machine learning in which the model learns by trial and error, receiving rewards or punishments for its actions and adjusting its behavior accordingly.

Spatial Computing:

Spatial computing is the use of technology to add digital information and experiences to the physical world. This can include things like augmented reality, where digital information is added to what you see in the real world, or virtual reality, where you can fully immerse yourself in a digital environment. It has many different uses, such as in education, entertainment, and design, and can change how we interact with the world and with each other.

Stable Diffusion:

Stable Diffusion generates complex artistic images based on text prompts. It’s an open source image synthesis AI model available to everyone. Stable Diffusion can be installed locally using code found on GitHub or there are several online user interfaces that also leverage Stable Diffusion models.

Supervised Learning:

A type of machine learning in which the training data is labeled and the model is trained to make predictions based on the relationships between the input data and the corresponding labels.

Unsupervised Learning:

A type of machine learning in which the training data is not labeled, and the model is trained to find patterns and relationships in the data on its own.

Webhook:

A webhook is a way for one computer program to send a message or data to another program over the internet in real-time. It works by sending the message or data to a specific URL, which belongs to the other program. Webhooks are often used to automate processes and make it easier for different programs to communicate and work together. They are a useful tool for developers who want to build custom applications or create integrations between different software systems.



WELCOME TO THE

5 STAR AI.IO

TOOLS

FOR YOUR BUSINESS