Car Chat General discussion about Lexus, other auto manufacturers and automotive news.

Self-Driving Vehicles

Old 03-21-18, 08:35 AM
  #661  
bitkahuna
Lexus Fanatic
iTrader: (20)
 
bitkahuna's Avatar
 
Join Date: Feb 2001
Location: Present
Posts: 75,261
Received 2,509 Likes on 1,649 Posts
Default

re: jeopardy
Originally Posted by Och
Glad you brought this up. The computer has access to terabytes of information, much more than a human mind can possibly memorize, and yet it still came up with some wrong answers - not because it didn't have the information needed to provide the right answer, but because the AI couldn't properly interpret the questions. Human players have no problem interpreting questions, but lack the information to always answer them correctly.
regardless it was still better than the best players who have ever played jeopardy.

If we apply this scenario to self driving cars, it would be equivalent to the car incorrectly interpreting a situation on the road and making a wrong decision, which can lead to an accident. If you read up on AI and machine learning, its problems and limitations, you will quickly realize how bad it can become.
there will be challenges for sure, and the jeopardy to self-driving car analogy doesn't really work because on jeopardy the computer isn't really risking much to take its best shot at answering (as long as its average accuracy is better than the others).

Originally Posted by Och
You're assuming that the data collected from the sensors is used in machine learning and affects decision making during future tasks, but I doubt very much this is how it actually works in current autonomous cars. I am sure that the main algorithms are static, coded to react only to real-life data, and not affected by any data that was collected in the past. This is not machine learning.
volvo for example has said their cars learn from prior human driven and self driving experience and through communications they can learn from the other cars experiences too. obviously due to communication speeds the car won't be making real time decisions by consulting cloud servers but it can continually update itself with better and better data asynchronously (offline).
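purely as an illustration of that "learn from the fleet offline, decide locally in real time" pattern (this is not volvo's actual software - the class and method names here are made up for the sketch), the idea looks something like this:

```python
# Illustrative sketch only: real-time decisions use just the locally
# installed model; fleet-learned improvements arrive asynchronously
# and are swapped in between decisions, never consulted mid-drive.

class FleetLearnedDriver:
    def __init__(self, model_version=1):
        self.active_version = model_version   # used for real-time decisions
        self.staged_version = None            # downloaded, not yet active

    def decide(self, sensor_data):
        # Real-time path: never blocks on the network.
        return f"decision from model v{self.active_version}"

    def receive_update(self, new_version):
        # Background download from the fleet's cloud servers.
        self.staged_version = new_version

    def apply_update(self):
        # Swap in the improved model when it is safe to do so (e.g. parked).
        if self.staged_version is not None:
            self.active_version = self.staged_version
            self.staged_version = None

car = FleetLearnedDriver()
print(car.decide(None))   # decision from model v1
car.receive_update(2)
print(car.decide(None))   # still v1 - update staged, not yet applied
car.apply_update()
print(car.decide(None))   # decision from model v2
```

the point of the split is that cloud latency never sits on the critical path: the car drives on whatever model it has, and only gets smarter between drives.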

all self-driving cars will be cloud connected and capable of continuous improvement.

I am also sure that this data is stored in a separate module, to be later used in machine learning for research purposes in a simulation, to observe how the car would react to situations on the road based on what it learned - but this tech is way too fresh to allow it to affect decision making of the actual car on public roads. If you're such an expert on machine learning you should know that it is data hungry, and it is not transparent how that data is being processed or why the machine comes up with certain decisions. This tech is never anywhere near 100% and is already hitting a brick wall in much simpler tasks that don't have the sheer number of scenarios of real world driving. It's hit a brick wall in image, speech and writing recognition already. Think about every time a speech recognition system fails to recognize speech correctly - for a self driving car that equates to failing to assess a road situation correctly and making a wrong decision, and that can be a life or death scenario. Do you think any company is reckless enough to allow it? Not to mention that even when those systems come up with the right decisions, it is not transparent why they reached them, and upon review it is often revealed that the machine happened to come up with the right answer without relying on enough parameters, or even the right ones.
well clearly many companies are a) investing billions each in this, b) already have a combined thousands of test cars on real roads, and c) have publicly announced plans and dates for release as soon as next year. so yeah i guess some are 'reckless enough to allow it'.

there will be setbacks and steps forward, legislative battles, proponents and naysayers like you, but it's unstoppable.

Originally Posted by Johnhav430
A goaltender is using a combination of what's happening, and what has happened in the past, individualized to the shooters ability and what he may or may not do at this instant. Sensors cannot provide such info.
yes sensors can only assess the present but that's not all that's going on... the car CAN interpret the sensors for appropriate action based on rules derived from millions of miles driven by all self-driving cars and scenarios.

edit: Can an autonomous vehicle make eye contact? Do they have firm handshakes?
who cares? let me give a different example of machine learning... google didn't 'teach' its language translation system all the rules of grammar and vocabulary for dozens of languages. instead it fed its learning algorithms millions of documents (like boring government ones) ALREADY TRANSLATED into dozens of languages and let the computer figure out what translates to what without actually understanding the meaning at all. that's an approach humans can't relate to but on computers, it works. is it perfect? no. is it pretty darned good and getting better all the time? yes.
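a toy sketch of that idea (the four-sentence 'corpus' here is invented, and real systems use far more data and far smarter statistics, like IBM-style alignment models - this is just the flavor of it): count which foreign word co-occurs most distinctively with each english word across already-translated sentence pairs, with zero grammar rules:

```python
# Learn word translations purely from aligned example sentences.
from collections import Counter

parallel = [
    ("the red car", "la voiture rouge"),
    ("the red house", "la maison rouge"),
    ("the car", "la voiture"),
    ("the house", "la maison"),
]

# Count how often each (english word, french word) pair co-occurs,
# and in how many sentences each french word appears at all.
cooccur = Counter()
fr_count = Counter()
for en, fr in parallel:
    for e in en.split():
        for f in fr.split():
            cooccur[(e, f)] += 1
    for f in set(fr.split()):
        fr_count[f] += 1

def translate_word(word):
    # Score each candidate by how exclusively it co-occurs with `word`,
    # so common filler words like "la" don't win on raw frequency.
    scores = {f: n / fr_count[f] for (e, f), n in cooccur.items() if e == word}
    return max(scores, key=scores.get)

print(translate_word("car"))    # voiture
print(translate_word("house"))  # maison
```

nobody told the program what "car" means - the right pairing just falls out of the counts, which is the whole trick.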

oh and how about siri, alexa, google now, etc.? can you name the capital and population of every country on earth? no, but your phone or other device can, easily.

Originally Posted by Johnhav430
So what are the autonomous vehicles' capabilities then, when it is driving along at 30 mph, and two vehicles cross its path unlawfully? I would love to see an autonomous vehicle navigate the GW Bridge during rush hour. If I were an autonomous vehicle mfg., I would do it, and upload it to YouTube. Getting cutoff unlawfully and unexpectedly is actually part of the normal traffic flow. Having a pedestrian dart out of nowhere on Queens Blvd. and Roosevelt Blvd. are as well. Again, no excuse for any fatalities.
yes a self driving car should be able to handle unlawful driving by others. a pedestrian 'darting out of nowhere' cannot always be avoided, by human or computer.
bitkahuna is offline  
Old 03-21-18, 09:15 AM
  #662  
Och
Lexus Champion
iTrader: (3)
 
Och's Avatar
 
Join Date: Feb 2003
Location: NY
Posts: 16,436
Likes: 0
Received 14 Likes on 13 Posts
Default

Originally Posted by bitkahuna
oh and how about siri, alexa, google now, etc.? can you name the capital and population of every country on earth? no, but your phone or other device can, easily.
Those are relatively straightforward tasks, and not really AI.

Originally Posted by bitkahuna
who cares? let me give a different example of machine learning... google didn't 'teach' its language translation system all the rules of grammar and vocabulary for dozens of languages. instead it fed their learning algorithms millions of documents (like boring government ones) ALREADY TRANSLATED into dozens of languages and let the computer figure out what translates to what without actually understanding the meaning at all. that's an approach humans can't relate to but on computers, it works. is it perfect? no, is it pretty darned good and getting better all the time? yes.
These systems are not that good, and not necessarily getting much better - in fact, they are starting to hit a brick wall. The low hanging fruit has been picked, and the last 25% is infinitely more difficult to perfect than the first 75%. In many of these examples they have reached the point of diminishing returns: further advances need exponentially more data, which in turn requires a lot more processing power, and the improvements are only marginal if any.

This tech can be used for trivial tasks, such as translating non-critical documents, recognizing non-critical speech, etc - but nobody would trust it in the medical, legal or financial fields without human review. The same is true for autonomous cars.
Och is offline  
Old 03-21-18, 04:16 PM
  #663  
Dave600hL
Lexus Champion
 
Dave600hL's Avatar
 
Join Date: Feb 2008
Location: Japan
Posts: 2,448
Received 2 Likes on 2 Posts
Default

Originally Posted by Och
You're assuming that the data collected from the sensors is used in machine learning and affects decision making during future tasks, but I doubt very much this is how it actually works in current autonomous cars. I am sure that the main algorithms are static, coded to react only to real-life data, and not affected by any data that was collected in the past. This is not machine learning.

I am also sure that this data is stored in a separate module, to be later used in machine learning for research purposes in a simulation, to observe how the car would react to situations on the road based on what it learned - but this tech is way too fresh to allow it to affect decision making of the actual car on public roads. If you're such an expert on machine learning you should know that it is data hungry, and it is not transparent how that data is being processed or why the machine comes up with certain decisions. This tech is never anywhere near 100% and is already hitting a brick wall in much simpler tasks that don't have the sheer number of scenarios of real world driving. It's hit a brick wall in image, speech and writing recognition already. Think about every time a speech recognition system fails to recognize speech correctly - for a self driving car that equates to failing to assess a road situation correctly and making a wrong decision, and that can be a life or death scenario. Do you think any company is reckless enough to allow it? Not to mention that even when those systems come up with the right decisions, it is not transparent why they reached them, and upon review it is often revealed that the machine happened to come up with the right answer without relying on enough parameters, or even the right ones.
How many different ways do I have to spell this out to you? I am NOT assuming. If you knew how ML works, you would know that ML is used to affect decisions in REAL TIME - Sebastian Thrun from Google has already stated that Supervised Learning is used in the Google car.

Supervised Learning
From Wiki,
"In order to solve a given problem of supervised learning, one has to perform the following steps:
  1. Determine the type of training examples. Before doing anything else, the user should decide what kind of data is to be used as a training set. In case of handwriting analysis, for example, this might be a single handwritten character, an entire handwritten word, or an entire line of handwriting.
  2. Gather a training set. The training set needs to be representative of the real-world use of the function. Thus, a set of input objects is gathered and corresponding outputs are also gathered, either from human experts or from measurements.
  3. Determine the input feature representation of the learned function. The accuracy of the learned function depends strongly on how the input object is represented. Typically, the input object is transformed into a feature vector, which contains a number of features that are descriptive of the object. The number of features should not be too large, because of the curse of dimensionality; but should contain enough information to accurately predict the output.
  4. Determine the structure of the learned function and corresponding learning algorithm. For example, the engineer may choose to use support vector machines or decision trees.
  5. Complete the design. Run the learning algorithm on the gathered training set. Some supervised learning algorithms require the user to determine certain control parameters. These parameters may be adjusted by optimizing performance on a subset (called a validation set) of the training set, or via cross-validation.
  6. Evaluate the accuracy of the learned function. After parameter adjustment and learning, the performance of the resulting function should be measured on a test set that is separate from the training set."
And it is obvious you don't understand that in order for Supervised Learning to be really effective, it also needs to use Unsupervised Learning and Reinforcement Learning at some level. So, in real time applications, the autonomous vehicle program is constantly evaluating its surroundings and using ML to build an image based model for object detection and prediction - this is Supervised ML. What do you think is happening?
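The six steps above can be made concrete with a toy example (the data is invented, and the "learned function" is a deliberately simple nearest-neighbour classifier - not what any actual autonomous car ships):

```python
# Minimal walk-through of the six supervised-learning steps,
# using a toy 1-nearest-neighbour classifier (pure stdlib).
import math

# Steps 1-2: choose and gather training examples (feature vector -> label).
# Features here are hypothetical (object width, object height in metres).
training_set = [
    ((0.5, 1.7), "pedestrian"),
    ((0.6, 1.8), "pedestrian"),
    ((1.8, 1.4), "car"),
    ((2.0, 1.5), "car"),
]

# Step 3: input representation - plain numeric feature vectors.
def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Steps 4-5: the structure of the learned function and the "learning"
# itself - here, just nearest neighbour over the gathered set.
def predict(features):
    return min(training_set, key=lambda ex: distance(ex[0], features))[1]

# Step 6: evaluate accuracy on held-out examples not in the training set.
test_set = [((0.55, 1.75), "pedestrian"), ((1.9, 1.45), "car")]
accuracy = sum(predict(f) == label for f, label in test_set) / len(test_set)
print(accuracy)  # 1.0 on this toy data
```

In a real vehicle the feature vectors come from camera/LIDAR pipelines and the learned function is far more complex, but the train/evaluate separation in steps 2 and 6 is exactly the same.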

You keep saying "I am sure" and "Do you think?", which means you have no idea what is going on here. I have had enough - you have no idea what you are talking about, and I am not going to entertain your inability to understand this even though I have spelled it out to you. You keep saying I am assuming, but I have given names of high profile people in the field with links on ML - and what have you given? An article that just states this tech has yet to come to production cars, even though it is in and working in non-production cars. You also keep saying that ML is dying, but each time I ask where the proof is, you go back to "I am sure" and "Do you think?". Where is this proof?

Last edited by Dave600hL; 03-21-18 at 04:28 PM.
Dave600hL is offline  
Old 03-21-18, 04:26 PM
  #664  
Och
Lexus Champion
iTrader: (3)
 
Och's Avatar
 
Join Date: Feb 2003
Location: NY
Posts: 16,436
Likes: 0
Received 14 Likes on 13 Posts
Default

Originally Posted by Dave600hL
How many different ways do I have to spell this out to you? I am NOT assuming. If you knew how ML works, you would know that ML is used to affect decisions in REAL TIME - Sebastian Thrun from Google has already stated that Supervised Learning is used in the Google car.

Supervised Learning
From Wiki,
"In order to solve a given problem of supervised learning, one has to perform the following steps:
  1. Determine the type of training examples. Before doing anything else, the user should decide what kind of data is to be used as a training set. In case of handwriting analysis, for example, this might be a single handwritten character, an entire handwritten word, or an entire line of handwriting.
  2. Gather a training set. The training set needs to be representative of the real-world use of the function. Thus, a set of input objects is gathered and corresponding outputs are also gathered, either from human experts or from measurements.
  3. Determine the input feature representation of the learned function. The accuracy of the learned function depends strongly on how the input object is represented. Typically, the input object is transformed into a feature vector, which contains a number of features that are descriptive of the object. The number of features should not be too large, because of the curse of dimensionality; but should contain enough information to accurately predict the output.
  4. Determine the structure of the learned function and corresponding learning algorithm. For example, the engineer may choose to use support vector machines or decision trees.
  5. Complete the design. Run the learning algorithm on the gathered training set. Some supervised learning algorithms require the user to determine certain control parameters. These parameters may be adjusted by optimizing performance on a subset (called a validation set) of the training set, or via cross-validation.
  6. Evaluate the accuracy of the learned function. After parameter adjustment and learning, the performance of the resulting function should be measured on a test set that is separate from the training set."

And it is obvious you don't understand that in order for Supervised Learning to be really effective, it also needs to use Unsupervised Learning and Reinforcement Learning at some level.

You keep saying "I am sure" and "Do you think?", which means you have no idea what is going on here. I have had enough - you have no idea what you are talking about, and I am not going to entertain your inability to understand this even though I have spelled it out to you. You keep saying I am assuming, but I have given names of high profile people in the field with links on ML - and what have you given? An article that just states this tech has yet to come to production cars, even though it is in and working in non-production cars. You also keep saying that ML is dying, but each time I ask where the proof is, you go back to "I am sure" and "Do you think?". Where is this proof?
I commend your "googling" skills. It's pointless to argue with you; you're stuck in whatever you want to believe.
Och is offline  
Old 03-21-18, 05:03 PM
  #665  
MattyG
Lexus Champion
 
MattyG's Avatar
 
Join Date: Jul 2013
Location: RightHere
Posts: 2,300
Received 4 Likes on 4 Posts
Default

The more basic questions that the NTSB and NHTSA are going to be poring over are whether all the sensors were clean, with no contamination. Did the LIDAR and the cameras "see" the pedestrian? She was walking her bike across these lanes, so that part should have been easy to detect and combine with some sort of collision-mitigation auto-brake. Even if she was in the bushes in the median, LIDAR could have picked her up as something other than a fire hydrant or a bush.

This is a really good explanation of Uber's systems.

https://techcrunch.com/2018/03/19/he...t-pedestrians/
MattyG is offline  
Old 03-21-18, 05:05 PM
  #666  
spwolf
Lexus Champion
 
spwolf's Avatar
 
Join Date: Jan 2005
Posts: 19,927
Received 161 Likes on 119 Posts
Default

Originally Posted by Och
You're assuming that the data collected from the sensors is used in machine learning and affects decision making during future tasks, but I doubt very much this is how it actually works in current autonomous cars. I am sure that the main algorithms are static, coded to react only to real-life data, and not affected by any data that was collected in the past. This is not machine learning.
ML is not that complicated and it has been used for a long time now. If you are writing any kind of algorithm today, you will likely use ML to make it better - to analyze more data, faster, and create a better algo. It is not AI though, and it is nothing out of the ordinary really.
spwolf is offline  
Old 03-21-18, 05:09 PM
  #667  
spwolf
Lexus Champion
 
spwolf's Avatar
 
Join Date: Jan 2005
Posts: 19,927
Received 161 Likes on 119 Posts
Default

Video is out:
https://www.axios.com/uber-self-driv...85e12a145.html

It is clearly an Uber software issue, and the safety driver was simply not attentive at the time.
This is where sensors should have detected the person on the street.

Uber is going to lose a lot of money over this.

Most safety systems out there today, with far inferior sensors, would have attempted to brake at some point...
spwolf is offline  
Old 03-21-18, 05:16 PM
  #668  
Och
Lexus Champion
iTrader: (3)
 
Och's Avatar
 
Join Date: Feb 2003
Location: NY
Posts: 16,436
Likes: 0
Received 14 Likes on 13 Posts
Default

Originally Posted by spwolf
Video is out:
https://www.axios.com/uber-self-driv...85e12a145.html

It is clearly an Uber software issue, and the safety driver was simply not attentive at the time.
This is where sensors should have detected the person on the street.

Uber is going to lose a lot of money over this.

Most safety systems out there today, with far inferior sensors, would have attempted to brake at some point...
Holy ####!!!

Lawsuits are going to be crazy.
Och is offline  
Old 03-21-18, 05:23 PM
  #669  
Mike728
Lead Lap
 
Mike728's Avatar
 
Join Date: Mar 2013
Location: IL
Posts: 4,847
Received 684 Likes on 507 Posts
Default

Hard to tell, but it looks pitch black out and she's crossing in the middle of nowhere. If true, I would bet she stood a 50/50 chance of being hit by an attentive driver in a non autonomous vehicle.
Mike728 is offline  
Old 03-21-18, 05:27 PM
  #670  
Hoovey689
Moderator
Forum Moderator
iTrader: (16)
 
Hoovey689's Avatar
 
Join Date: Oct 2008
Location: California
Posts: 42,312
Received 126 Likes on 84 Posts
Default

Wow, that video is so sad. The car just barreled right through. Pre-Collision (assuming it's on the car, or Uber's own system) didn't react whatsoever. PCS doesn't always prevent crashes, but it mitigates them as best it can, which maybe could have saved a life. What about the headlights? They seemed standard - looked a bit low and cut off, but it's hard to tell. Auto high-beams would have helped on those long dark stretches, illuminating more of the surroundings.
Hoovey689 is offline  
Old 03-21-18, 05:27 PM
  #671  
Och
Lexus Champion
iTrader: (3)
 
Och's Avatar
 
Join Date: Feb 2003
Location: NY
Posts: 16,436
Likes: 0
Received 14 Likes on 13 Posts
Default

I can't believe that Uber a) trusted an autonomous vehicle that costs hundreds of thousands of dollars to the hillbilly in the video, and b) that with all their autonomous equipment, the quality of the video shot by their onboard cameras is worse than a $60 ebay dashcam.
Och is offline  
Old 03-21-18, 05:43 PM
  #672  
MattyG
Lexus Champion
 
MattyG's Avatar
 
Join Date: Jul 2013
Location: RightHere
Posts: 2,300
Received 4 Likes on 4 Posts
Default

That is very sad to see, and it's a tragedy when you consider that she died this way. She emerged out of the shadows and there was no way any kind of braking system was going to stop that car in time. Prior to that though, with her already out on the road, the LIDAR should have alerted the driver and started engaging the automatic emergency stop system well before the Uber got to where she was.

This is going to be big time because as people may recall, Volvo and Uber signed a deal for 24K XC90 autonomous vehicles. Now that is in doubt until the engineers, NHTSA and the NTSB figure things out. If they find a flaw, the NTSB can "ground" these vehicles by issuing an order until things get sorted out.
MattyG is offline  
Old 03-21-18, 05:53 PM
  #673  
spwolf
Lexus Champion
 
spwolf's Avatar
 
Join Date: Jan 2005
Posts: 19,927
Received 161 Likes on 119 Posts
Default

Originally Posted by MattyG
That is very sad to see, and it's a tragedy when you consider that she died this way. She emerged out of the shadows and there was no way any kind of braking system was going to stop that car in time. Prior to that though, with her already out on the road, the LIDAR should have alerted the driver and started engaging the automatic emergency stop system well before the Uber got to where she was.

This is going to be big time because as people may recall, Volvo and Uber signed a deal for 24K XC90 autonomous vehicles. Now that is in doubt until the engineers, NHTSA and the NTSB figure things out. If they find a flaw, the NTSB can "ground" these vehicles by issuing an order until things get sorted out.
she was in the middle of the road... lidar sees better in the dark and radar sees the same in the dark or light... only the cheapest camera systems see worse in the dark (like the one on the Yaris), but even then they would react at a certain point.

It is a pure software issue, and why is that surprising? Cars, phones, laptops and everything else today break down every day. Why would an autonomous system work perfectly in its finalized version, let alone during testing of the technology? Obviously the safeguards in this case were not in place, and that is not surprising - the reason car companies are slow is that they take things more seriously and test too much to be fast... technology startups do not; they move fast, and safeguards are the least of their worries.
spwolf is offline  
Old 03-21-18, 05:57 PM
  #674  
Och
Lexus Champion
iTrader: (3)
 
Och's Avatar
 
Join Date: Feb 2003
Location: NY
Posts: 16,436
Likes: 0
Received 14 Likes on 13 Posts
Default

Originally Posted by MattyG
That is very sad to see, and it's a tragedy when you consider that she died this way. She emerged out of the shadows and there was no way any kind of braking system was going to stop that car in time. Prior to that though, with her already out on the road, the LIDAR should have alerted the driver and started engaging the automatic emergency stop system well before the Uber got to where she was.
She didn't emerge out of the shadows; it's the lack of contrast in the video that makes it seem that way. You can see many street lights in the video, so the street was very well lit. The car's speed wasn't very fast either, only 38 mph. And the pedestrian didn't just dart out of nowhere like originally claimed - she was actually crossing slowly, and there was plenty of time for an attentive human driver to see her and react appropriately.
Och is offline  
Old 03-21-18, 06:05 PM
  #675  
Och
Lexus Champion
iTrader: (3)
 
Och's Avatar
 
Join Date: Feb 2003
Location: NY
Posts: 16,436
Likes: 0
Received 14 Likes on 13 Posts
Default

Originally Posted by spwolf
she was in the middle of the road... lidar sees better in the dark and radar sees the same in the dark or light... only the cheapest camera systems see worse in the dark (like the one on the Yaris), but even then they would react at a certain point.

It is a pure software issue, and why is that surprising? Cars, phones, laptops and everything else today break down every day. Why would an autonomous system work perfectly in its finalized version, let alone during testing of the technology? Obviously the safeguards in this case were not in place, and that is not surprising - the reason car companies are slow is that they take things more seriously and test too much to be fast... technology startups do not; they move fast, and safeguards are the least of their worries.
It did absolutely nothing to react at all - no braking, no swerving, just zero reaction from the car. This is precisely my worry with semi-autonomous driving technologies - they will encourage drivers to get distracted, and as a result there are going to be more accidents than before those technologies existed.

A lot of heads are going to roll in court - Uber, Volvo, whoever manufactured the lidar, radars and cameras, and whoever developed the software.
Och is offline  

