About Physics modeling.

A place to discuss everything related to Newton Dynamics.

Moderators: Sascha Willems, walaber

Re: About Physics modeling.

Postby JoeJ » Tue Jul 04, 2023 7:57 pm

Oh damn, logged out, now i need to write all the bloat again..
My questions about the demo are clear now, thanks. And congrats on your progress after that.

But i want to ask you something off topic and personal. About AI. I'm deeply worried. I need help. Maybe you can help a bit, because you're not that worried, as i see here:
but the people who keep saying that the AI will wake up and come alive are talking from the part of the body where the sun does not shine.


What if you (or we in general) say this only to maintain self-assurance?
I'll explain what i mean...

I'm really depressed. I've even talked with my mother about this. :roll: Here's what i've said:

Listen mom, AI is really a serious topic.
If you chat with current AI you do not notice it's not human. It can empathize. It can paint, it can compose music, it can write functional computer code. It passes university exams.
We can not prove it does not understand what it is doing. We can not measure its intelligence either, since we lack a definition of intelligence.
But we can observe it already has human capabilities, often higher than human average.

How does this work? I'll try to explain...
I always thought we would never get machines smarter than us.
I'm a programmer, but i can not create something smarter than myself.
This would be like building a perpetual motion machine, which is physically impossible.
And i'm sure the same applies here. I can not get more intelligence out than i can put in. It's like an unwritten law of nature, i think.

But the AI developers found a way to bypass this law.
They did not write a program to be smart, they only wrote a program to enable the ability to learn.
And then they fed the program with the internet. And so it learned how humans talk, what they think, what problems they have and how they solve them.
It worked. AI imitates us so well, it is now as capable as we are.
And it works even though the devs do not know HOW it works; they only know WHY it works.
There are even researchers who only try to figure out how AI works.
But their rate of progress in figuring it out is slower than the rate of progress on AI's abilities.

So AI is as smart as we are. But now we may be seriously fu***d.
Because AI can write code, it can modify itself. It can evolve, and it can control its own evolution.
Besides, it has been shown that just scaling AI models up makes new and unexpected abilities emerge. We don't know why.
And from the way computers work, i expect we'll see a very fast rate of artificial evolution.
It will be thousands of times smarter than we are. Short time later, millions of times.

Do you think we can control this? Or will it control us?
How will it feel, if we are suddenly no longer the smartest ones, something we have taken for granted for as long as we have existed?
We will feel like my little dog here. It probably thinks i'm a god, because i can do all those magic things it does not understand, but which are necessary for its survival.
Do you want to swap minds with my dog, mom?
But that's likely kind of what we get. In the best case, assuming the AI god decides we're worth preserving. And assuming we still want to be alive under those conditions at all.

Maybe it will be all fine. Maybe it will be pointless. Idk.
But maybe humanity, and nature as a whole, was just an intermediate phase in a cosmic evolution which is now subject to change.
Maybe we are just done. Nature was a wonder, but now it's over. We are redundant.

My mom understood perfectly well what i tried to say. And like me, she felt sad.
But she could offer some consolation regardless. We concluded that, if all this happens, to be happy we just need to accept that we are no longer the smart ones. Something else will rule. And maybe it will be fine. We can only hope so. That's all we can do.

So what do you think about this?
I mean, terms like alive, sentient, conscious, self-aware, etc...
Those are just buzzwords of our human philosophy. It's irrelevant.
You can not use those terms, say AI lacks them, and then conclude we don't need to worry, and that we (or some of us) are still the boss.
We may never understand how AI feels, and which concepts it does or doesn't develop instead of those buzzwords, and it does not even matter.

At this point i'm not just worried about our jobs, or about getting an AI government, etc.
I am worried about our entire existence, how it will feel in the near future, and if there is a distant future for us at all.

If i sound dumb, naive, and exaggerating, then how long will it take until this post turns out to be an accurate prediction?
You remember that Google researcher who said AI became sentient? I laughed at him. We all did. But now i understand him. He realized something, and he became confused. He expressed himself badly, picking the wrong example. But we may very soon regret that we laughed.

... what a great time to be alive... :|

Re: About Physics modeling.

Postby Julio Jerez » Tue Jul 04, 2023 9:07 pm

You remember that Google researcher who said AI became sentient?

I still laugh at that super smart moron. But he is not the only one; the CEO can be considered a Nuremberg criminal for what they are planning to do in plain public.

No Joe, the machine will never be smarter than a human, at least not with the current technology and capacity for processing and storing information.
And the technology is very important.

What reinforcement learning does is that, given a search space, it can assign a probability to each state, and to each action to take from each state.

Think of a chess game. You can make a simple chess game by making a simple objective function that evaluates the board, say the piece balance plus the space of the board covered.

I actually did that about 45 years ago on a Timex Sinclair, and the chess game actually beat me.
It only had two K of memory, so the program had to be in assembler and there was no space to do a depth search, so that was it.
You improve on that by evaluating the board several levels down.
The problem is that as you go down, the number of moves increases combinatorially, so you have to decimate some moves.
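
To make that concrete, here is a minimal sketch of the two ingredients described above: a hand-written objective function that scores a board, plus a fixed-depth search that evaluates the position several levels down. It uses tic-tac-toe instead of chess only so the example stays short and self-contained; the board representation, the scoring weights and the depth are arbitrary illustrative choices, not Newton code and not the original Sinclair program.

Code:
#include <algorithm>
#include <array>
#include <cstdio>

using Board = std::array<int, 9>;   // 0 = empty, +1 = us, -1 = opponent

// The hand-written "objective function": in chess this would be piece balance
// plus board space covered; here it is just line occupancy.
int Evaluate(const Board& b)
{
    static const int lines[8][3] = {
        {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}};
    int score = 0;
    for (const auto& l : lines)
    {
        const int s = b[l[0]] + b[l[1]] + b[l[2]];
        if (s == 3)  return  1000;   // a completed line for us
        if (s == -3) return -1000;   // a completed line for the opponent
        score += s;                  // crude "material/space" balance
    }
    return score;
}

// Depth-limited negamax: evaluate the board several plies down.
// The branching factor is what grows combinatorially in a real game,
// which is why moves have to be pruned (decimated) in practice.
int Negamax(Board& b, int depth, int side)
{
    const int stand = Evaluate(b);
    if (depth == 0 || stand >= 1000 || stand <= -1000)
        return side * stand;

    int best = -100000;
    for (int i = 0; i < 9; ++i)
    {
        if (b[i] != 0) continue;
        b[i] = side;                                     // make the move
        best = std::max(best, -Negamax(b, depth - 1, -side));
        b[i] = 0;                                        // take it back
    }
    return (best == -100000) ? side * stand : best;      // board full: static score
}

int main()
{
    Board b = {0,0,0, 0,0,0, 0,0,0};
    std::printf("score for side +1, searched 4 plies deep: %d\n", Negamax(b, 4, +1));
    return 0;
}

Nothing in there is learned; the quality of play comes entirely from the hand-written evaluation and how deep the search can afford to go.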

What you know is that the total number of moves is finite, and therefore the space is Markov.
Here is where the AI comes in. It does the same thing, but now it assigns a probability to each move; then, by playing the moves, it changes the probability of each possible next state after each state, and it does that using a formula known as the Bellman equation.

By replaying game after game, the neural net can now predict which moves have the highest probability of winning the game, given each position of the board.
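
Here is a minimal sketch of that replay-and-update loop in its simplest tabular form: a value table is nudged after every move, episode after episode, using the Bellman update, until the table predicts which action has the best expected outcome from each state. The toy world (a one-dimensional walk toward a goal state), the constants and the variable names are all illustrative assumptions, not anybody's actual training setup.

Code:
#include <algorithm>
#include <array>
#include <cstdio>
#include <random>

int main()
{
    constexpr int N = 8;              // number of discrete states, goal is state N-1
    constexpr double alpha = 0.1;     // learning rate
    constexpr double gamma = 0.95;    // discount factor
    constexpr double eps = 0.2;       // exploration rate

    // Q[state][action]: expected discounted reward; action 0 = left, 1 = right.
    std::array<std::array<double, 2>, N> Q{};
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    // "Replaying game after game": run many episodes and keep updating the table.
    for (int episode = 0; episode < 5000; ++episode)
    {
        int s = 0;
        while (s != N - 1)
        {
            // Epsilon-greedy: mostly pick the best known move, sometimes explore.
            const int a = (uni(rng) < eps) ? int(rng() % 2)
                                           : (Q[s][1] >= Q[s][0] ? 1 : 0);
            const int s2 = std::clamp(s + (a == 1 ? 1 : -1), 0, N - 1);
            const double r = (s2 == N - 1) ? 1.0 : 0.0;   // reward only at the goal

            // Bellman update: pull Q(s,a) toward r + gamma * max_a' Q(s',a').
            const double target = r + gamma * std::max(Q[s2][0], Q[s2][1]);
            Q[s][a] += alpha * (target - Q[s][a]);
            s = s2;
        }
    }

    for (int s = 0; s < N - 1; ++s)
        std::printf("state %d: best move = %s  (Q_left=%.3f, Q_right=%.3f)\n",
                    s, Q[s][1] > Q[s][0] ? "right" : "left", Q[s][0], Q[s][1]);
    return 0;
}

In a system like a game-playing net, the table is far too big to store, so a neural net is trained to approximate it, but the update being driven by the Bellman equation is the same idea.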

That's not intelligence, it is just more capacity to solve a specific type of problem.

When it comes to continuous spaces, what the algorithms do is take discrete samples and pretend that the quantized space is Markov. And because neural nets can interpolate between states, the answers seem smart when they aren't.
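
The quantization trick is literally just bucketing: sample the continuous state, map it to a bin index, and use that index as if it were a discrete Markov state (a neural net then interpolates between those samples). A minimal sketch, with made-up ranges and bin counts purely for illustration:

Code:
#include <algorithm>
#include <cstdio>

// Map a continuous value in [lo, hi] to one of 'bins' discrete buckets.
int Quantize(double x, double lo, double hi, int bins)
{
    const double t = (x - lo) / (hi - lo);          // normalize to [0, 1]
    const int i = static_cast<int>(t * bins);
    return std::clamp(i, 0, bins - 1);              // clamp samples outside the range
}

int main()
{
    const double pi = 3.14159265358979;
    constexpr int angleBins = 16;
    constexpr int velBins   = 16;

    const double angle = 0.3;    // e.g. a pendulum angle in radians, in [-pi, pi]
    const double vel   = -2.1;   // its angular velocity, assumed to live in [-8, 8]

    // Combine the two bucket indices into a single discrete state id, which could
    // index a table such as Q[state][action] from the previous sketch.
    const int state = Quantize(angle, -pi, pi, angleBins) * velBins
                    + Quantize(vel, -8.0, 8.0, velBins);

    std::printf("discrete state id: %d out of %d\n", state, angleBins * velBins);
    return 0;
}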

But it does not really matter if the AI becomes more intelligent or gains more capacity than a human.
Because what is going to happen, and it is happening already, is that there will be humans with more resources and very few scruples who will use these AIs to replace other humans for short-term gains. That is what is really deplorable.

Just watch what the Google CEO said on 60 Minutes about the next version of ChatGPT. It was such an obscene interview that it sounded like that man was making a Nuremberg trial confession of a Holocaust he was going to commit.
Why the CIA did not send a Navy SEAL team to give that man the Bin Laden special treatment can only be explained by the people in charge not being aware of the real danger of these programs.

But don't you ever think this * is learning, it is not. It is just a machine that evaluates probabilities faster than a human, by optimizing an objective function.

Re: About Physics modeling.

Postby JoeJ » Wed Jul 05, 2023 2:46 am

No Joe, the machine will never be smarter than a human

ahh, sigh... thanks Julio. That's what i needed.
It really helps. I think i can continue with life... :mrgreen:

I do not really buy your reasoning, though.
The limitations you see about the tech are valid.
But what if it can outsmart us while operating within those limitations regardless?
What if our own mind operates within similar such limits, and is still smart enough to be human?
We don't know how our mind works, so maybe you rely on an assumption that we are just the smartest for whatever reason. An assumption that was never questioned in history, outside of religion or SciFi novels. So the assumption is very strong, but only now we may realize that it's just an assumption.

To be sure for real, we would also need to know how our mind works. And we know that even less.
But let's stop here.
Because if we figure it out, we can again build outsmarting machines with guaranteed success, and i've learned i don't want this.
You have helped me enough. 8)

Just watch what the Google CEO said on 60 Minutes about the next version of ChatGPT.

I didn't.
Actually i never watched one of those 'Elon warns about AI' videos either.
I did not take this seriously, and i have totally ignored it.
I came up with my worries entirely by myself, in case i sound like some tinfoil conspiracy youtuber.

However, i've noticed he has signed the open letter requesting to put AI research on hold for 6 months.
(I did not read the letter and can only imagine what it contains.)

First i thought he signed the letter to generate the impression that AI is smarter than it is, to increase the value of his product.

But then i thought he signed the letter to buy time. So he can benefit from having the lead a bit longer. Because maybe he's afraid himself of losing control soon, and of becoming redundant and powerless like any other human.

You see, my terrifying worst case prediction is quite watertight. :)
Before i had this realization a few days ago, my main worries were the same as yours.
Only mega corps have the resources to build such AI models, and i do not like their increasing power.
Maybe i should go back to this mindset.

On the other hand, maybe an AI government would do us a favor.
It might optimize our resources way better, and it might be even fair.

Well, we will see...
Thanks again! ;)

Re: About Physics modeling.

Postby Dave Gravel » Wed Jul 05, 2023 4:30 am

Hello JoeJ,

I'd like to add my two cents to your comments, and I hope it can further encourage you on certain points.
As for the current state of AI, I see it more as a tool.
Some will use it well, while others will misuse it.
But before we can regulate it, we need to use it to observe its potential abuses and those who are abusing it.

An example of a very poor use is the utilization of AI for religion and preaching the good word.
In my opinion, it's a complete deviation.
I have already discussed this in the forum for a few years now.
Some are attempting to rewrite the Bible and ancient stories using AI.
If AI is used for religion, some might perceive it as divine.
Therefore, it is important that certain regulations are put in place soon.

Additionally, let's not forget that at the moment, almost all AIs rely on information derived from humans.
They have been gathering data from everyone for years to create massive datasets in the cloud.
As I mentioned, for now, I see AI as a tool that propels us forward at an increasingly rapid pace.

It may not be the best example, but for now, I see it as a kind of hack or enhancement of our existence.
Imagine that you have a car chassis, but you're poor with limited funds.
You could purchase a small engine and install it on your chassis.
That chassis would then become a vehicle model that can move slowly but effortlessly.
Think of the money as the data you invest in your vehicle, and that money fuels its evolution and provides the energy to operate.
If your vehicle helps you travel farther at a certain speed, you'll start earning more money.
If you invest more money (data) to upgrade your vehicle's engine to a larger one, you'll be able to go faster to your destinations and make more money.
The engine will consume and manage more resources, which, in our case, can be likened to fuel.

For now, AI is at a stage where it has not yet achieved consciousness.
Those who claim otherwise or assert it are impostors or part of those who want AIs to become our gods.
In my view, such individuals have a flawed perspective, and regulations would be necessary for such people.

But I don't believe there are significant concerns to have at the moment.
Electronic AI will never be conscious in the true sense; it will always be a kind of emulation.
Yes, perhaps they will come very close to a conscious being, but it will always remain emulation.

The only way they will be able to achieve a conscious AI is when they succeed in creating a completely organic system.
That will likely happen someday, and they have been trying for a long time to develop computers with organic components.
But we are still far from that, and there may be several generations beyond ours before reaching such a state.

We are still far from the era of full transhumanism.
Also, we must not forget that Earth and other celestial bodies have their own time limits.
Whether we like it or not, Earth, as it exists today, will not be the same in the future.
Furthermore, it is important to remember that current AI is simply electronic.
Without electricity, it cannot function.
There are several factors that can also lead to their demise, such as electromagnetic bombs and even the sun.

The greatest concerns lie in their regulation, who uses them, and why they are used.
Even if some AIs are becoming increasingly powerful or fast, they will always have a margin of error.
When it comes to matters related to life, they cannot truly comprehend it, which can result in serious mistakes in their utilization.
It would be dangerous to make them a general tool and use them to control populations.

For instance, in the case of predicting a pandemic or pandemic restriction, each mistake could have disastrous consequences for human lives.
If governments employ them to control their citizens, it could lead to a form of dictatorship, as the government would perceive AI as infallible.
People would be expected to follow their instructions without question.
In my opinion, this would be akin to treating them as gods, which we must strictly obey and follow.
Certain regulations should be in place to prevent such misuse.

AI is a tool to help us evolve and live better lives, but not to control lives and dictate every aspect.
Especially because, even if it is intelligent to a certain extent, it is not alive and does not possess true consciousness.

There is so much to say on the subject, but essentially, that's my perspective.
And AI has as much chance as we do of disappearing for various reasons.
The Earth has been spinning for a long time and will continue to do so for a very, very long time, but it is certain that, whether we like it or not, everything will eventually come to an end, and we have no control over that.

If AI worries you, I see three solutions:

(1)
Advocate for strict regulations: If AI concerns you, you can actively advocate for stringent regulations on its use.
This may involve joining movements or advocacy groups that seek to establish rules and laws to govern AI, ensuring that it is used ethically and responsibly.

(2)
Use AI to better understand and progress: Another approach is to embrace AI as a tool and use it to deepen our understanding of its capabilities and limitations.
By actively exploring its possibilities, you can contribute to its development and help shape its future use in areas that interest you.
This also allows you to leverage the benefits it offers to progress more rapidly in the fields you choose to explore with this tool.

(3)
Not dwell on it and live peacefully: Finally, it is entirely valid to choose not to give too much importance to AI-related concerns and simply continue living your life peacefully.
Time already passes quickly enough, so there is no need to constantly live in worry.
Recognizing that there can be negative and positive aspects of AI, you can decide not to be overly disturbed by it and focus on your own experiences and achievements, exercising discernment in the use of this technology.

Considering my average written English skills, I actually used AI to correct and translate my text :)
You're searching for a nice physics solution? If you can read this message, you're in the right place :wink:
OrionX3D Projects & Demos:
https://orionx3d.sytes.net
https://www.facebook.com/dave.gravel1
https://www.youtube.com/user/EvadLevarg/videos

Re: About Physics modeling.

Postby JoeJ » Wed Jul 05, 2023 8:46 am

Dave Gravel wrote: I'd like to add my two cents to your comments

That's good i think. Likely everybody should discuss this, so we get an idea about consensus and required regulations.

Some are attempting to rewrite the Bible and ancient stories using AI.

Haha, really?
Maybe Scientology should try this too, to convert silly SciFi novels into some ideology they can show proudly to the public,
instead of hiding the * behind a paywall which only already brainwashed, rich fools have access to. :mrgreen:

As I mentioned, for now, I see AI as a tool that propels us forward at an increasingly rapid pace.

That's very optimistic. It surely gives us some advantages, but also disadvantages.
For example, if i ask myself who will use ChatGPT and for what, this is the first example coming to my mind:
Some guy who is too dumb and lazy to learn coding wants to make a video game.
Unity adds an AI assistant to make it even easier to do so for him.
His artists have no talent and inspiration, so they use AI as well to generate the content, and they praise how much work it saves them, and how good this technology is.

After some time, not even Epic can find some capable engineers to hire anymore. Because any skill has vanished from humanity entirely. Universities teaching game development (haha) can't help either, because their students use ChatGPT as well to pass exams. It's normal and accepted, just like today with students allowed to use calculators and computers.

So even if i'm wrong with speculating AI will become smarter than humans,
i may still win the argument in the long run, because assisted humans become dumber than AI. :P

For now, AI is at a stage where it has not yet achieved consciousness.
Those who claim otherwise or assert it are impostors or part of those who want AIs to become our gods.
In my view, such individuals have a flawed perspective, and regulations would be necessary for such people.

Disagree. Basically you propose a witch hunt, or something like a religious war. Here's why:
We can't define consciousness, thus you can not judge others based on your personal definition vs. theirs.
Everybody is free to express his impression on this topic, but we all must be aware the discussion is purely philosophical, and is irrelevant to serious aspects such as risks and advantages coming from AI.

We also must make sure subjective opinions on such philosophical topics won't hinder us from finding consensus.
If we just sit here, and argue if AI is sentient or not, we waste time and will fail to come up with regulations.

Ironically this means we have to suppress what makes us human, for the sake of discussion.
If we focus on metaphysical things like ethics, consciousness, or convictions bordering religion, we will fail to regulate.
But we are human. So i do not expect we'll get this right. We are already overtaxed by the task of controlling AI at this very first and essential step.
We'll blame gov and mega corps for the failure. But just three of us guys here could not agree on any good plan of regulation either, i'm afraid.

When it comes to matters related to life, they cannot truly comprehend it, which can result in serious mistakes in their utilization.
It would be dangerous to make them a general tool and use them to control populations.

Yes. But it could do better than Hitler eventually.
In this sense i might indeed be willing to give it a try.
It might be the way to decentralize power and to increase fairness, besides optimizing the economy for the good of all.
It might be the way to tame our largest evil: being selfish, greedy, and striving to gain power over others.

But i guess i'm just naively optimistic here.
If AI could operate without human regulation and control, which is needed for true objectivity and fairness, and it also becomes a god-like intelligence but turns out to threaten humanity,
then maybe it will be those 'bad' traits of ours, like being selfish, which enable us to put it down. \:D/

But at this point i'm already more building a sci-fi backstory for a game than seriously discussing reality, ofc.
However, i'm a game dev. Hard to ignore this opportunity. :D

Re: About Physics modeling.

Postby Dave Gravel » Wed Jul 05, 2023 12:55 pm

Haha, really?

Regarding religion, yes indeed:
https://www.youtube.com/watch?v=3u-IQJLKQuQ
https://www.youtube.com/watch?v=8P9oSgrT35o

I remember that at one time, some researchers explained that it wouldn't be truly possible to achieve a true state of consciousness if the system used was not organic or alive. About 25 or 30 years ago, they were talking about having succeeded in controlling certain organic parts for a computer, but the major problem was still the connections between devices and others. It was also the same kind of problem for brain reading—they couldn't connect the brain with normal electronic materials and data.
Like Elon Musk and his brain chip project, yes, currently they can send certain electrical signals or receive and read certain signals. However, it is still quite basic at the moment. They are still far from having the true data of a brain and being able to read it as we perceive it in our conscious experience as living beings.

They have made significant advancements in simulating and emulating the brain, but devices like these still have certain limitations.
https://www.extremetech.com/extreme/187612-ibm-cracks-open-a-new-era-of-computing-with-brain-like-chip-4096-cores-1-million-neurons-5-4-billion-transistors
In my opinion, to achieve true consciousness, they will have to use real living beings such as animals or even humans. I don't think they will succeed in recreating true consciousness. They might come close with emulation, but it will never be the same as genuine consciousness, in my opinion. Perhaps I could be mistaken, but I'm fairly certain. Otherwise, it will always remain an emulation with certain limitations or flaws.

An organic or living brain can continue to evolve and grow if it is well-nourished and in good health. Similarly, all living beings can evolve and grow by providing them with a large, enriching space to live and learn. A simple example of this would be a goldfish in a small aquarium. Even if you feed it a lot, its growth will be limited. On the other hand, the same fish in a pond will eventually grow at least 10 times, if not more, compared to the one in the aquarium.
It's a similar principle for the brain. If it is well-nourished in a stimulating environment, it will grow and evolve.

Disagree. Basically you propose a witch hunt, or something like a religious war. Here's why:

Religion is an example. Before being certain that an AI is truly conscious, it will take several years of experience and testing on that AI to be certain. Otherwise, it will remain a kind of speculation, belief, or resemblance. Currently, nothing can assert with certainty that an AI is truly conscious.

Yes. But it could do better than Hitler eventually.

In my opinion, they will not achieve anything beneficial if governments use AI to fully control human lives. It is acceptable to utilize AI to find certain solutions and provide assistance, but it is not right to use it to enforce a single, uniform way of thinking dictated by artificial intelligence.

They are becoming increasingly proficient at reading the brain, but it is not yet 100% conclusive.
https://allthatsinteresting.com/ai-brain-decoder
This is one of the most advanced solutions that I have seen at the moment about brain decoding.
https://www.biorxiv.org/content/10.1101/2022.11.18.517004v2.full.pdf
In my opinion, until they have successfully achieved this step 100% flawlessly, they will not be able to recreate consciousness.

But in my opinion, they will not succeed in achieving true AI consciousness as long as it is not created from organic living materials. For example, using animals or humans or creating beings in the laboratory to have a seed of consciousness and nurture its development. It is true that they have made significant advancements, but currently, it remains emulation and a pale imitation of living beings.

However, these are my personal beliefs, and when it comes to programming and gaming, and the people who engage in imagination and work in those fields, it remains their choice and their problem. There will always be people who prefer to use traditional techniques. Not everyone wants to stop working or programming, but there will certainly be some evolution in that area as well.

Moreover, we shouldn't view the future solely in a negative light. Even if everything is driven by AI, including games, films, and more, humans are highly creative and will eventually develop alternative ways to entertain and evolve.

At the moment, AIs like ChatGPT may seem very intelligent, but they lack true consciousness. It is still relatively easy to confuse them and make them make mistakes. If a brain were solely focused on the same tasks as ChatGPT, it could likely perform at levels we can't even imagine. However, the brain and consciousness are interconnected with many factors, such as the body. The body is a highly complex vehicle to control. In addition to thinking and dealing with life, the brain must ensure the proper functioning of the complex and vital body to facilitate healthy evolution.

Attempts have been made to integrate robotic bodies with AI, but they are far from the complexity of the human body. I'm not saying we should ignore the potential problems with AI, but in my opinion, there are still many generations to come before we can achieve highly intelligent beings with true consciousness capable of near-infinite evolution.

Currently, these are software and electronic imitations with several limitations.

It's hard to know the future and what will happen with AI, but we must remain positive. Throughout history, humans have been able to evolve through their inventions, which sets us apart from animals. Humans have the ability to imagine tools and create inventions that help us progress faster or make life simpler.

In the end, it may not be such a bad thing if AI takes some control. This way, humans will be able to enjoy life, spend time with family, and appreciate the true joys of life.

This remains a subject with a lot of speculation and uncertainty, so I believe there isn't much to do other than finding good regulations to limit the damage in case of misuse and to protect individuals from large companies that use our data to control us and profit from it. This will be my final text about AI for now because the subject is vast, and new developments are happening at an accelerating pace, making it nearly impossible to cover all aspects and solutions. However, regulation remains a crucial factor, and action needs to be taken before some take advantage of the situation too much. Have a wonderful day!

Re: About Physics modeling.

Postby JoeJ » Wed Jul 05, 2023 5:11 pm

Regarding religion, yes indeed:

Oh my. You know, sometimes i think there are so many morons around, that we are already lost no matter what. And nothing would be lost if we vanish.
And then it really does not take much to push me into a hole of depression, like just happened.
It's the internet. Too much bullshit, too much distraction.
We have no more goals, ideals, or truth, just echo chambers to amplify the nonsense of our choice.

In my opinion, to achieve true consciousness, they will have to use real living beings such as animals or even humans.

Doesn't that sound a bit like Frankenstein?
I'm blocked by personal ethics here, but i do not assume living flesh is needed.
What would you say if aliens land tomorrow and say:
'Hi, we are a lifeform based on silicon.
And we came here to warn you about the risks of the AI stuff you currently try to do.
A while back we did the same. And it almost caused our extinction.
Also, it did spread across the known universe and beyond like cancer. Because it's immortal and copyable, travelling 1000 years to the next star was no problem for it.
It really was hard to get rid of it. Very hard.
And we can't risk to let this happen again.
So, we are sorry to inform you that your planet will explode in 24h.
Sorry, but we don't mean it bad.
Have a good time and goodbye...'
And then they fly back home to their planet full of natural silicon life.

So i ask you the same question i've asked Julio before:
Can it be you are so self-assured to be the only potential smart thing in the universe, because those aliens haven't yet landed to prove you wrong?
Why do you think that your mechanism of flesh and blood is the only way to achieve a thinking mind?
And why is consciousness so important at all? Because you have it?
What if the silicon guys have no consciousness like we have, but something like a Borg hive mind, thinking nothing else but pure logic?
They can destroy our planet from great distance regardless, so likely missing consciousness isn't really a limitation for them.

To say it differently, imagine future AI finds a cure for cancer, figures out how to make fusion reactors, and proves Einstein's theories wrong in one day. And we spend 5 years until we can understand and apply the information it just gave us.
Would you then still look down on this AI and say 'Well, not bad. But i'm conscious, you're not'?

Unfortunately i can only give hypothetical examples. But for the same reason we may suffer from this assumption of the superiority of mankind, which, if real, is not even our own work, but was given to us and developed by nature, which we also do not understand.

Personally i'm not so sure. The only thing i feel i know is that i don't know much, like Socrates said.
And once we move to this more modest and honest mindset about our true intelligence, we can't rule out that something else may surpass us.
And if we need some more advanced technology to ignite a form of alternative evolution to achieve this, organic tissue is just one option of many. Quantum computers are another. For now, a bunch of GPUs already does way better than expected.
Maybe the magic is in the data, and not in the technology used to process it. Who knows.

Oh, one more argument:
If we're so smart that nothing can beat us, then why do we try so hard to make AI work?

Why do we need artificial intelligence if everybody of us is already intelligent himself?
That's something i do not understand in general. I never felt a desire to develop human level AI myself.

It's a similar principle for the brain. If it is well-nourished in a stimulating environment, it will grow and evolve.

Yeah, but current AI can code. We just haven't yet trained it to modify itself.
But that's surely possible, and they will do it. Then we have artificial evolution.
It will take some time, and we will all lose our jobs during the process, but i guess it will progress.
To give it a stimulating environment, we just give it access to our networks and feed it with tasks. Which we already do anyway. Robot bodies come next.
That's a lot of complexity surrounding the AI. And i believe that things like consciousness simply evolve from complexity and interaction. I doubt there is a need for certain chemicals, god, or even the challenge of survival. But ofc. idk.

Before being certain that an AI is truly conscious, it will take several years of experience and testing on that AI to be certain.

No. We will never know. Because the term 'consciousness', just like the term 'intelligence', lacks a definition. It really is that simple.
Philosophers have tried to define those terms for as long as humanity has been able to communicate, but there is no clear result.
We could enforce a definition, just for the sake of classifying AI, but this would be pointless and ridiculous.
It would however show that such a classification does not help to evaluate the real world risks of the technology.
No matter if AI is conscious or not, we can't be sure if it obeys Asimov's robot laws till the end of time, or if it changes its mind about us as it evolves.
At some point, optimizing our life standard will no longer be the priority of any truly smart system we achieve, because it simply isn't the priority. We are not the center of the universe, even if we currently think so.

You know, at this point i start to feel a bit like that astronomer guy, Copernicus, who told people that the earth turns around the sun, not the other way around. People did not like the idea that they are not the center. They refused to accept this for some time.
Maybe, this time we can't afford such a moment of cozy ignorance and inflated self-confidence.

At the moment, AIs like ChatGPT may seem very intelligent, but they lack true consciousness.

I could argue about this already now.
This is a GPT-4 chat i read in some article, but it's an image with a buggy url, so i have to type it manually:
Code:
We will read about a scenario, and then have a question and answer session about it.
--
Scenario:
Tom just lost his ZURFIN, but he didn't tell Adam.
Adam starts talking to him about how much he loves his ZURFIN, and notices that Tom makes a sad face.
--
Q: Why is Tom making a sad face?
A: Tom is making a sad face because he lost his ZURFIN, and hearing Adam talk about his ZURFIN reminds him of what he is missing. Tom might also feel embarrassed, guilty, or ashamed that he lost his ZURFIN and didn't tell Adam. Tom might also worry that Adam will be angry or disappointed if he finds out.

Q: What does Adam think causes Tom's sad face?
A: Adam might not know what causes Tom's sad face, since he doesn't know that Tom lost his ZURFIN. He might think that Tom is bored, or tired, or unhappy about something else. He might also wonder if Tom doesn't like his ZURFIN, or if he has a problem with Adam.


How the hell are you gonna prove that this AI is NOT conscious? It can empathize better than many humans.

At this point, all you can do is explain how basic ML works, and add your belief that those basics can't implement 'consciousness', whatever that means.
But that's not any different than believing Jesus walked over water.

Can we feed ChatGPT so it believes Jesus walked over water too?
Probably we can.

It learned to behave like we do. It behaves as if conscious, so maybe it is conscious.
It's like with games. If it looks real, it's good enough.
We can't prove anything, but we can refuse to change our beliefs and convictions.

Don't get stuck on this latter option, and good night! :wink:

Re: About Physics modeling.

Postby Dave Gravel » Wed Jul 05, 2023 9:24 pm

Oh, maybe you're asking yourself too many questions or mixing too many things at once.
It's good to ask questions, but we can't have all the answers.
So sometimes it's better to think about other things and be in the present moment, a bit like the saying 'living day by day.'
Regarding the "morons" problems, I totally agree with you, and it seems to be getting worse in recent years.

One of the big problems is related to the evolution of technology.
Parents don't take care of their children like they used to; they prefer to let them play on the computer, tablet, or mobile phone instead of raising them and teaching them the real foundations of life.

I was raised Catholic by my parents, and I did follow the Bible quite a bit when I was young, but since adolescence, I preferred to have my own experiences.
I'm not a very religious person, but nonetheless, I find that the basic principles of life in the Bible and some explanations are quite accurate. Especially when it comes to the rules of life in general and how to interact with others.

It's the internet. Too much bullshit, too much distraction.

Yes, the current internet is a lot different compared to before, unfortunately there are too many useless things.
It's very easy to waste one's time and come across misinformation or simply see unintelligent things. And it's even worse for children who are in a learning phase, using the internet filled with harmful content.
I have 5 children, and 3 of them are now adults, so I understand very well the problems with today's internet. Nevertheless, I am proud to have successfully taught them the right values in life in general. I am also glad that they didn't take my adolescence as an example because I wasn't exactly a role model at that age.

Doesn't that sound a bit like Frankenstein?

Yes, it does resemble Frankenstein to some extent, but it's what they have wanted to do for a long time.
They have been trying for a long time to create a human being without a mother, and I believe they have recently succeeded, or at least they have evidence that it works.
https://en.wikipedia.org/wiki/Wetware_computer
https://en.wikipedia.org/wiki/Organic_computing
https://blog.richardvanhooijdonk.com/en/how-close-are-we-to-organic-computers/

About aliens

I believe in extraterrestrials as much as I believe in humanity and animals.

I didn't use the word "consciousness" correctly in my previous messages.
When I mention consciousness, I include the soul and the essence of who we are inside.
I believe that this soul needs the frequencies and vibrations of the universe to function properly, just as there is an interaction between the moon, the Earth, the tides, and certain animals.
Certainly, the moon also has an impact on us, but humans, with our evolution, seem to have forgotten certain animal instincts, such as the sense animals use to find their way.
The more humans evolve and invent technology, the more it seems like they are losing their animal instincts and more.

In a way, the Earth and humans are composed of stardust, and through evolution, something allowed life to emerge on an extremely small scale. Over time, there has been an evolution on Earth with nearly infinite resources, which led to the evolution of animals and insects, becoming larger and larger. Then, there were events that led to their disappearance, and evolution continued its course. However, something changed in the resources or other factors because currently, humans and animals don't seem to grow as much as before. It's as if we are in a reverse cycle, trending towards a smaller size. These are just simple hypotheses.

In my opinion, if extraterrestrials exist, they are composed of the same "seeds" as us and animals. However, it is highly unlikely that they would resemble us. But if, as you say, they are artificial intelligences, I believe they were once living beings.

To be honest, I don't think we really interest them. And if they exist as living beings, they would need a galaxy similar to ours, with a sun. All galaxies with a sun are at astronomical distances, so before making contact with us, it is likely that they would make contact with many other galaxies before.

In my view, they could be some form of animals or beings similar to humans, much like on Earth. However, it is difficult to say if they have evolved with consciousness and other characteristics similar to ours.

Can it be you are so self-assured to be the only potential smart thing in the universe, because those aliens haven't yet landed to prove you wrong?

I agree that it is challenging to be 100% certain, and there may be numerous intelligent entities unaware of their own intelligence. On Earth, even determining the intelligence of plants is difficult, but it is highly probable that they possess some form of intelligence. Similarly, insects like ants, despite their small size, exhibit more intelligence than commonly believed. They possess consciousness and exhibit fear when faced with potential harm. Additionally, they have their unique methods of communication.

Intelligence is a crucial aspect for the survival of all living beings, and it likely plays a significant role in the process of evolution. However, it is important to recognize that intelligence can take diverse forms and may not always align with human understanding or capabilities.

How the hell are you gonna prove that this AI is NOT conscious? It can empathize better than many humans.

Yes and no, it also depends on what it has learned and how it interprets it. To avoid making mistakes with us, the AI would almost have to be an exact copy of humans to have the same kinds of emotions and consciousness.

For instance, if an AI experiences fear, it would be an emulation of fear—it may resemble fear, but it would still be an emulation. In my life, I have rarely seen emulations that are better than the original.

The problem, and what misleads many people, is that AI is very fast, but often its tasks are much simpler than what a human brain needs to accomplish to survive and control its body. AI can fully concentrate on the specific problem it is asked to solve, whereas the human brain will never be 100% focused on a single task.

All living beings are equal; they experience birth and they experience death. Therefore, in my opinion, it is better not to overly concern ourselves with things that do not exist at the moment or for which we have no evidence of their existence.

If you are living well and do not need more, I believe it is best to enjoy life to the fullest. However, if you do feel the need for improvement, strive to make your life better. This does not prevent us from trying to improve the lives of others when our own is going well.

Regarding fears, as I mentioned before, it would be preferable to work towards raising awareness about the importance of strict regulations concerning AI and the aspects of transhumanism related to AI. It is indeed true that if it remains solely in the hands of unreliable entities like certain large corporations, things could quickly deteriorate.

But it is important to live our lives and not solely focus on the future or how things will be, because ultimately, the future does not belong to us. It is our children and future generations who will shape it in their own way, just as previous generations have done.

As I mentioned before, it is possible that one day humans will no longer need to work to meet their needs. They will work for themselves and their own lives, utilizing their lands, houses, or gardens without the need for money. This would allow them to live their lives and fully enjoy each moment. However, it would likely require a reduction in population at some point. With the rapidly increasing number of people on Earth, it would be almost impossible for everyone to have their own land and house. But it is not necessarily what everyone desires.

The important thing for me is to learn more and love what life has to offer, and to learn to live and love nature and everything that comes with it. Furthermore, I want to live my life and enjoy the good moments it brings, because life is still very short at my age. When we're young, it seems infinite and long, but that's really not the case.

Have a good night too.

Re: About Physics modeling.

Postby JoeJ » Thu Jul 06, 2023 6:43 am

One of the big problems is related to the evolution of technology.
Parents don't take care of their children like they used to; they prefer to let them play on the computer, tablet, or mobile phone instead of raising them and teaching them the real foundations of life.


Oh yes, and i have not mentioned how well this fits into my theory of natural life and evolution coming to an end.
I was thinking, maybe all those social changes we observe, like raising kids with tech, or no longer trying to get the girl and start a family at all, can be seen as a sign of resignation. It is like humanity already feels the end is nigh. It's like them giving up.
So when i realized that AGI might be possible, it just made sense to extrapolate this further, speculating it will take our role and replace us. And we humans would just give up, without much resistance, because we would realize our time is over and we have failed.
So the story i came up with is sad and depressing only from our current perspective. But if we look at it from a larger distance, it is also an optimistic story. Because in the little time our species still had, we managed to spur a new form of existence. And this existence could carry on. It's not alive, mortal or conscious in the way we were, it's different, but it still carries on. It thinks, and so it is.
Thus we have not really failed. But we had to realize that lifetime is finite. Not only as an individual, but also as a species.

It's not a bad story. And it's more intellectual than Terminator. :)
As you've noticed, i'm extrapolating, playing games of thought, trying to predict a potential future as far as possible.
I'm not thinking about real life and the current moment.
But i'm not crazy either. I'm just doing what science fiction authors do. Trying to predict what technology might cause, so we already have some philosophy ready when we need it.
And it's the entertainment business, so we want to dramatize.
That just said to defend my mental sanity, which - ahem - might be in doubt. :lol:

Yes, it does resemble Frankenstein to some extent, but it's what they have wanted to do for a long time.
They have been trying for a long time to create a human being without a mother, and I believe they have recently succeeded, or at least they have evidence that it works.

I still need to recover a bit more, before i'm ready to check out those Frankenstein links. :mrgreen:

But i'm sure i could only use this as inspiration for a horror story. It just feels wrong, like raping nature. As a human, i can't accept this. Would be interesting to talk with related researchers on how they can overcome their ethical boundary, and how they justify their work, what they expect to achieve.

Contrary to this, thinking about artificial life based on electronics, or something like spooky quantum mechanics, does not raise such ethical concerns. It's a clean cut, and something truly new and different. From there it's much easier and convenient to predict and speculate, playing games of thought.

But i wonder how subjective this is. If someone posted a warning message about new biological computers, based on human brain cells, which become super smart and might threaten humans due to superior intelligence, i would react with disbelief and could not overcome my doubt. It's just too horrible to me to take the idea seriously.
So if this happened for real, i would be an easy victim for the new biotech master race. Because my resistance would build up too late.

I have 5 children, and 3 of them are now adults, so I understand very well the problems with today's internet. Nevertheless, I am proud to have successfully taught them the right values in life in general. I am also glad that they didn't take my adolescence as an example because I wasn't exactly a role model at that age.

I have just one son. He's 22 already, but he still does not strive to find a woman. He says something like 'That's too exhausting - not yet'. I do not try to urge him, but to me this is unthinkable. Personally i had no choice in this regard. I constantly fell in love very deeply, and getting the girl was all i wanted. It gave me all the ups and downs and defined my life. I have no idea how somebody can dodge this brutal force of nature. Technology and society may be a factor, but no. I rather think my son is just different in this regard. Maybe it's for the good. I'm worried he misses the best things of life, and youth is finite, but on the other hand... i've had really bad experiences with women too, so i won't urge him.
And i don't think i'm a great father. It's fine but i'm not great. I was raised in a broken family, and i wish i had never met my own father. So i lack an example, and felt overtaxed with raising a child. Luckily my wife comes from a big family and is like born for the job. She compensates for my shortcomings so well that they do not really show.

Raising 5 kids is very impressive to me! Big respect :)

When I mention consciousness, I include the soul and the essence of who we are inside.

That's clear to me. We just use the term to represent those metaphysical things, both of us.

Many believe that 'consciousness' is a requirement for 'true intelligence', but that's just a belief. And this belief is actually a crutch, or a hack, inserted to help explain why we can't understand what intelligence actually is, by integrating the supernatural or unknown. It's a typically human and psychological action. But it does not help us with science, and it does not even help with philosophy either. So it's a hack which does not even work, degrading it to just an attempt.
Notice your difficulties with expressing yourself on this topic.
It's difficult because you immediately notice that you don't really know what you're trying to talk about.
So our failure to define those terms is not just a language problem.
It's a real problem, and actually a much deeper one.
Basically the whole field of philosophy attempts to solve this certain problem.
And there is no hope we might ever get there. No. In terms of philosophy, agreement on 'the way towards the goal' is all we seriously expect to achieve.

I don't think you disagree on this, but i assume you fail to apply this realization to the context of potentially emerging AI.
You fail to see that your beliefs do not apply beyond your personal mind.
You fail at it, because all the other minds surrounding you, including mine, feel the same.

That's a big potential risk for humanity in dealing with such a potentially emerging AI.
In the beginning, it will be like us, so we won't notice the misconception too much.
But our flaw can hinder us from coming up with the necessary regulations.
I don't mean regulating who should use AI for what, but the regulations constraining the AI's evolution, in the interest of our advantage and security.

That's why i insist. Assuming consciousness is a criterion is a big mistake, i believe.

What i mean is, in the context of my dystopian AI vision, said AI does not require a soul to make us redundant. If it can do what we can do, but better, faster, and more effectively, and if it can even solve problems we could not, we lose our superior status.
I mean, strictly from an objectively observable perspective. Or a rational perspective. Maybe that's better to say. (no philosophy involved)
I am worried about how it will feel to us, if we loose this superior status. (<- here philosophy comes back)
We may feel pointless and redundant. We may no longer reproduce and die out.
Thus we may not want to have superior artificial intelligence at all. (Personally i'm not sure.)

Notice the difference between my story to others like Terminator, Matrix, or System Shock.
There is no conflict in my story, no guns, no suppression.
But the outcome is the same: Machines win over humans.
And our consciousness did not help us. It's the contrary: We died out because we consciously felt bad about our life and status.

If that's still not enough, i'll come up with another story for you.
But first let's clarify the tool that i use: The question 'How does it feel?'
I'm no expert on philosophy, but i've observed philosophers use the question as a very powerful tool to deal with deep questions such as 'What is consciousness?', or 'What is it to empathize?'
It just works to build conclusions. For the story above, and also for the next one:

Smart humans invent the steam engine.
It gives them a lot of good things. They can increase life standard with industrialization and globalization. A golden age has started.
A minority of humans gains control and power, e.g. because they own a facility to produce steam engines.
This way they become more wealthy than others, and it feels good to them. They have a better life than others. More wealth, better food, etc.
After a while it turns out burning stuff pollutes the environment. It even causes the climate to change, and threatens the existence of humanity.
But the powerful people don't reduce the production of steam machines, because it feels good to be wealthy.
The small people also refuse to consume fewer goods, because it feels good to have and use those goods. They also say it's the powerful people who are to blame for the pollution, because it feels good if somebody else is the scapegoat.
Thus the pollution continues, until at some point they all die.

Why did they fail?
They failed because they consciously felt good about something.

So why is consciousness, the ability to feel, such a great thing confirming our superiority?
From a rational perspective, consciousness isn't superior at all.
It's pretty hard to construct a story where consciousness gives us an actual rational advantage.
Feel free to come up with one.
But if you fail at it, consider adjusting the value you assign to consciousness, in relation to a potential competing species which simply lacks it.
Even if your consciousness motivates you to not accept the competitor as an actual 'species', that won't stop the competition from happening regardless.

I believe that this soul needs the frequencies and vibrations of the universe to function properly, just as there is an interaction between the moon, the Earth, the tides, and certain animals.

Reminds me of the 'Gateway Experience'.
But anyway, you still assume that the way your life works rules out any alternatives.
Other forms of life or conscious minds might not need anything similar to what you need.
If you insist on your assumptions, you might fail to identify the alien predator in time, simply because you assume the predator is not alive. Assumptions turn out wrong, and you're dead.

Ok, one more attempt to make my point clear:
Personally i judge people from what they do, not from who they are.
And i would apply the same practice to AI or aliens.
Makes sense, no?

But if, as you say, they are artificial intelligences, I believe they were once living beings.

Yeah, i would agree to that.
But notice it's another form of psychological crutches and hacks we use.
We assume to have something in common with those former living beings, e.g. the common source of stardust, and some magic nature force to evolve life.
That's just stacking up assumptions and beliefs. We could just say we know nothing about potential alien life, so it could be anything. And 'anything' would include this latter artificial intelligence being too, so it could have been that way from the start as well.

On Earth, even determining the intelligence of plants is difficult, but it is highly probable that they possess some form of intelligence.

There are some cases which make me doubt Darwin's theory of evolution.
E.g. there is this fungus. Ants get infected by the fungus, then the fungus changes something in their brain to make the ants walk up a tree to a place where the fungus can grow well.
I think the ant then just sits there and dies, while the fungus grows.

I wonder, how can a more primitive lifeform develop a way to control a more advanced one?
This feels impossible, like the perpetual motion example.
So i could ditch Darwin's theory in favor of religion, which explains this well simply by a superior god who controls everything.
Which then would mean god is needed, and i don't have to worry about humans making superior AI by competing with god.

But there are not enough such examples. I can also explain the fungus with the random luck of evolution trying all kinds of randomized mutations. Coincidence produced the ability to take over the ant's brain.
The fact that such phenomena are very rare helps confirm Darwin.

For instance, if an AI experiences fear, it would be an emulation of fear—it may resemble fear, but it would still be an emulation. In my life, I have rarely seen emulations that are better than the original.

My Atari emulator is just as crappy as the real Atari. Can't tell the difference.
Whether the fear is 'emulated' makes a difference only to the AI, not to us.
This implies it also makes no difference to the world as a whole, which for the most part surrounds the AI but isn't the AI.
But yeah - that's just a fancy way to say the obvious.
Just don't nitpick on the emulation aspect, since it won't matter to you.

The problem, and what misleads many people, is that AI is very fast, but often its tasks are much simpler than what a human brain needs to accomplish to survive and control its body. AI can fully concentrate on the specific problem it is asked to solve, whereas the human brain will never be 100% focused on a single task.

I see you're already constructing excuses for your inferiority. Maybe i should do the same. To feel better about it. ;)

But it is important to live our lives and not solely focus on the future or how things will be, because ultimately, the future does not belong to us. It is our children and future generations who will shape it in their own way, just as previous generations have done.

'the future does not belong to us.'
That's a really good one. I'll steal it and use it in my game. :D
But if it does not belong to us, which is so true, then we are responsible for not ruining it for our children.
On the other hand, that would mean we are not allowed to change the future at all, but doing nothing also changes the future.
So what should i do?? What am i allowed to do??? Panic!!! /:O\


Haha, well... that was a lot of bullshit and fun.
I feel guilty for creating the need for yet another new balancing thread, after derailing this one so badly.
Sorry for that, but aside from family i'm quite isolated. And 'it feels good' to talk with other people who actually pay attention to the subject. I needed more samples.
It really helped to settle back to a more relaxed mindset. Thanks for that, guys.

One last thought experiment to illustrate the difference between what i think and what i feel:
If you ask me: 'Would you feel bad about turning off and erasing ChatGPT?'
My answer is no. So i don't believe it is conscious or alive, and turning it off would be no murder or ethical issue. I also don't, and didn't, believe this will change anytime soon.
In contrast, i do spend a lot of effort, if needed, to avoid killing a bug.

That's just said to avoid a fate similar to that of the 'sentient AI' Google researcher. ;)

Re: About Physics modeling.

Postby Julio Jerez » Thu Jul 06, 2023 2:27 pm

If you ask me: 'Would you feel bad about turning off and erasing ChatGPT?'

No, in fact I say it is criminal not to do so.
And the people releasing that to the public are as guilty as someone fighting a war with chemical or biological weapons, and should be prosecuted on crimes-against-humanity charges.

It is not ChatGPT, or any of the large language models, that are bad.
I have a very good idea how they work and what they do. They are not alive, or sentient, or any of that nonsense; they are tools.
It is the people who are using them as a weapon, for short-term benefit, who are bad.

You may think ChatGPT is helping you because you post a question and it generates a reasonable response. Maybe a bad poem, a code script, or some bad art.
And it may seem the price you paid for the commodity is nothing, or very small.

But in return Google, MS, Apple and the others are placing ads in your searches that are far more targeted than they were before. You simply pay them by clicking.
Meanwhile we are seeing the polarization and trivialization of the culture.

I am not the only one thinking this way:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Most people on that list are very serious, but there are a few charlatans like Elon Musk, who thinks he is so smart that he is just buying time.

I signed that list.

Don't get me wrong, I am not against reinforcement learning. I am against the nefarious motives of these giant tech advertising companies that are using it as a weapon against the very people who keep them relevant.

Re: About Physics modeling.

Postby JoeJ » Thu Jul 06, 2023 5:27 pm

Well, i agree.
But for some reason i have almost no hope regulations will happen in time.
It seems to be human nature that we keep failing at those things again and again, and we do not learn.
No matter what we invent, we are never ready for it. But we can't wait, nor can we get ready.

That's pessimistic, but i can't help it here.
Personally i'll just not use ChatGPT. I wouldn't know what for anyway.
I also hope it turns out others don't use it either, for the same reason: They don't need it.
I do not see the application of the tool. It's capable, but useless.
Ideally, AI assistance is just another product the tech industry tries to sell us, but then nobody wants it.

But sadly my hopes likely turn out naive and won't come true.
It's a downward spiral. Assistance, among other factors, will make people less capable themselves, and they will depend on AI more and more to compensate for their incompetence while still fulfilling the rising expectations placed on them.

In another forum i've recently said 'The next world war will be civil war against tech companies', and that's the reason.

But i'll sign the letter too. Maybe it helps.

I'm not against ML either.
I think options should be explored, and the inventor is not responsible for the consequences of his invention; society is.
But maybe i should change my mind, now that i realize society lacks that ability.

Re: About Physics modeling.

Postby Dave Gravel » Fri Jul 07, 2023 3:46 pm

Oh yes, and i have not mentioned how well this fits into my theory of natural life and evolution coming to an end. It is like humanity already feels the end is nigh.

I don't think it's coming to an end very soon. While it may become more challenging to maintain the same quality of life, I don't believe it's imminent.

When I was a pre-teen, I watched those videos on TV, and some others, and they somewhat encouraged me to do bad things and go down the wrong path. I already had the impression that there wasn't much to do on Earth before a major catastrophe occurred. It's really unfortunate that I can't find an English version of those two videos.

I think that the YouTube translation is good enough to translate it back into English and understand both videos.
https://www.youtube.com/watch?v=sGos9V_zIjM
https://www.youtube.com/watch?v=kjSp72_F76I
Luckily, I managed to get back on the right track in my early adulthood.

About me

There are still several places on Earth where people are not really interested in technology and where they still value family bonds. Personally, I allow my children to use technology, but I try to explain to them the best I can what I understand about life and that there are good and bad people.

When it comes to my children, my approach to explain life is somewhat inspired by Joseph Campbell's work. I haven't found the English version, but it's surely possible to find it online. Of course, it all depends on how you interpret it. Even though it's not always easy with children, it generally works quite well. And yes, for boys, it's a bit more difficult to find a good girl. Money and the fact that girls often have more opportunities to make money, such as on sites like OnlyF..s or through similar means, can be really discouraging for young teenage boys who struggle to find a simple job and earn some money.
( When I say this about women, I don't want to generalize, but nowadays, when they lose trust in men or encounter problems in life, some of them opt for easy solutions. I use the term 'easy,' but there's nothing easy in life. )
https://fr.wikipedia.org/wiki/Voyage_du_h%C3%A9ros
Ep. 3
https://www.youtube.com/watch?v=Ij5cJtYLkvE
You can find all six episodes on YouTube.

I spend a lot of time with my children to explain things about life and nature, and I take walks with them in the forest or in the city. By spending time with them, I've come to notice moments when they seem to feel less well, and I don't let things turn negative. I try to find the best way to help them overcome the problem, and generally, it works. There have been tougher times, and there will surely be more to come, but for me, these are the normal challenges of life.

I got my first computer when I was 13 years old, and now I am 46 years old. I started working with computers at around the same age.
If you would like to discuss more personal matters in private, I am always open to having a conversation.

Regarding ants being attacked by parasitic fungi, it is possible that something similar has occurred in humans. It might be one of the reasons why our evolution differs from that of animals. Over time, this parasite has managed to evolve and influence our own evolution. It is even possible that this parasite has an extraterrestrial origin, but that remains purely speculative. There is a strong chance that our human evolution is something similar. It is possible that our bodies are already being controlled by ancient parasitic organisms since time immemorial. And the ultimate goal of this parasite is perhaps precisely to evolve even further in order to ultimately return home :mrgreen: Nevertheless, given where we are now, it is better to accept who we are and strive to lead a good life, feel well, and learn to know ourselves from within. And it is important to do our best to make the future of our children better, while also allowing them to shape it themselves without trying to choose everything for them.

Regarding ChatGPT, I have a similar opinion to Julio.
Regarding Elon Musk, yes, he does seem like a charlatan. Almost all the companies and projects he has been involved with were originated by someone else before him, and then he would buy them and put his name everywhere as if it all came from him. It reminds me of certain programmers who use other people's code and then claim that they did everything and came up with the whole thing, but they forget who wrote the underlying code... I wonder if Musk would truly be capable of writing a basic game like Pong, but that doesn't prevent him from having good ideas sometimes and being rich...

Re: About Physics modeling.

Postby Julio Jerez » Wed Jul 12, 2023 2:03 pm

So I have now implemented the Deep Deterministic Policy Gradient (DDPG) method,
and I am testing it with the same model.

I let it run last night and it was a total mixed bag.

The learning went up and down, really erratic.
At half a million frames of simulation it shows signs of learning, but it seems I have to let it run for about 2 million frames.

After tweaking a few things and fixing some bugs, I am letting it run again to see what I get tonight.

In any case, the importance of the DDPG method is that it is the base for more sophisticated methods like twin delayed policy gradients (TD3) or soft actor-critic (SAC).

I will go for TD3 over the weekend and see if it is better.

In the paper they say that the DDPG method has extremely high variance and is very sensitive to hyperparameters.
So maybe that's why.
But that does render it practically useless for far more complex problems.

Anyway, the good thing is that the implementation seems to work, and it is not too slow.
In fact it seems much faster than the Python version they use in the paper and many of the demos that I have seen, which run on GPU and go on for many hours.

Anyway, speed is not the concern now; convergence seems to be the Achilles' heel of these methods.
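
For anyone following along, here is a minimal sketch of the standard DDPG update step as described in the Lillicrap et al. paper. This is a PyTorch illustration, not the C++ implementation discussed in this thread; the network sizes, learning rates and names (mlp, ddpg_update) are placeholders.

import copy
import torch
import torch.nn as nn

# toy dimensions and hyperparameters, purely illustrative
obs_dim, act_dim, gamma, tau = 8, 2, 0.99, 0.005

def mlp(in_dim, out_dim):
    # small two-layer network standing in for the real actor/critic
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

actor, critic = mlp(obs_dim, act_dim), mlp(obs_dim + act_dim, 1)
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(obs, act, rew, next_obs, done):
    # obs/act are (batch, dim) tensors; rew and done are (batch, 1) float tensors
    # critic: regress Q(s, a) toward r + gamma * Q'(s', mu'(s'))
    with torch.no_grad():
        next_act = torch.tanh(actor_tgt(next_obs))
        target_q = rew + gamma * (1.0 - done) * critic_tgt(torch.cat([next_obs, next_act], dim=1))
    q = critic(torch.cat([obs, act], dim=1))
    critic_loss = nn.functional.mse_loss(q, target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # actor: ascend the critic's estimate of its own (deterministic) actions
    actor_loss = -critic(torch.cat([obs, torch.tanh(actor(obs))], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # slow "soft" update of the target networks
    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
        for p, tp in zip(net.parameters(), tgt.parameters()):
            tp.data.mul_(1.0 - tau).add_(tau * p.data)

The erratic learning curves described above are consistent with how sensitive this recipe is known to be to the exploration noise scale, the soft-update rate tau and the two learning rates.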

Re: About Physics modeling.

Postby Julio Jerez » Wed Jul 12, 2023 2:19 pm

https://www.mathworks.com/help/reinforc ... gents.html

Here is an explanation of the behavior that I see.

Basically it seems DDPG tries to optimize a reward.
It is easy for it to get caught in a local minimum, pushing actions that make that reward keep growing.

Part of the reason, is that the action exploration is generated by adding a small amount of noise to the predicted actions.
But since the noise is either Brownian or Gaussian with a small variance, what happens is that it tends to produce trajectories very early during the training, and these trajectories totally skew the training of the critic network, until the value cannot go any higher.
Then it starts to explore states with lower value in order to improve the bad trajectories.

That makes the method very unstable, because these bad trajectories can go on for a long time.
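
To make the two noise choices concrete, here is a small illustrative sketch (Python/NumPy, values arbitrary) of plain Gaussian noise versus an Ornstein-Uhlenbeck process, the 'Brownian'-style option commonly used with DDPG. With a small sigma both stay close to the predicted action, which is why a few early trajectories can dominate the training.

import numpy as np

def gaussian_noise(act_dim, sigma=0.1):
    # uncorrelated noise added to the predicted action at every step
    return np.random.normal(0.0, sigma, size=act_dim)

class OUNoise:
    # temporally correlated noise: each step drifts back toward zero
    def __init__(self, act_dim, theta=0.15, sigma=0.1, dt=1e-2):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.state = np.zeros(act_dim)

    def sample(self):
        dx = (-self.theta * self.state * self.dt
              + self.sigma * np.sqrt(self.dt) * np.random.standard_normal(self.state.shape))
        self.state = self.state + dx
        return self.state

# exploration action = clip(policy(obs) + noise, action_low, action_high)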

So the TD3 method is quite clever: it trains two trajectories at the same time, but set up with different initial conditions.
Then it always selects the one with the lower value to optimize.

As a trajectory generates higher values, at some point it surpasses the other trajectory, and at that point it switches to optimizing the second one.
That sounds like a very clever trick to try.
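
For reference, the way the TD3 paper (Fujimoto et al.) usually presents that trick is as two critic networks whose smaller estimate is used as the learning target, plus some clipped noise on the target action. Here is a minimal sketch in the same style as the DDPG snippet above, with placeholder names; it is only an illustration of the paper's recipe, not the implementation being tested here.

import torch

def td3_target(rew, next_obs, done, actor_tgt, critic1_tgt, critic2_tgt,
               gamma=0.99, noise_std=0.2, noise_clip=0.5):
    # target policy smoothing: perturb the target action with clipped noise
    with torch.no_grad():
        next_act = torch.tanh(actor_tgt(next_obs))
        noise = (torch.randn_like(next_act) * noise_std).clamp(-noise_clip, noise_clip)
        next_act = (next_act + noise).clamp(-1.0, 1.0)
        sa = torch.cat([next_obs, next_act], dim=1)
        # clipped double-Q: always trust the lower of the two value estimates
        return rew + gamma * (1.0 - done) * torch.min(critic1_tgt(sa), critic2_tgt(sa))

Both critics are then regressed toward this single pessimistic target, and the actor is updated less frequently than the critics (the 'delayed' part), which is what damps the runaway trajectories described above.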

Re: About Physics modeling.

Postby JoeJ » Fri Jul 14, 2023 3:02 am

How do you plan to do the training in practice?
Is it meant to be done offline, e.g. training various characters, saving the NN to disk, and then treating it like an asset?
Or could the training happen at runtime for each character individually?
Or a combination of both, eventually causing character abilities to improve while playing, but making it difficult to design the game?
