Echo & Alexa Forums

Context Awareness & Ability


ChrisThomas

Context Awareness & Ability
« on: February 23, 2017, 06:21:37 am »
I have one Amazon Echo shared between my kitchen and living room, and then I have 4 more Dots around my house. These control a mixture of wireless light bulbs, switches (LightwaveRF), plugs (Hive) and blinds.

Alexa is generally great, but one feature I think it sorely misses is the concept of location. It knows it's in the town where I live, but really, each Echo and Dot should have a setting for which ROOM it is in. If a room is supplied, then saying "Alexa, turn off the lights" should be internally expanded to "Alexa, turn off the LIVING ROOM lights" (if that Echo was set to LIVING ROOM). My Office Dot should hear "Alexa, close blinds" as "Alexa, close OFFICE blinds".

Contextual awareness is key to our own language skills, and without a true sense of location (context), Alexa can't be as intuitive as it should be.

My second gripe is with Discovery of "Devices". Once discovered via Skills, devices present their various modes to Alexa, sometimes with VERY long names, or names that are just not appropriate. Also, some of the default behaviours for these devices do not work very well.

Let me give some examples.

I have my Lightwave "Office Blind"; it appears in Alexa as such, so the name is OK. However, to use it I have to say "Alexa, Office Blind Off" rather than "Open". That's not very intuitive, is it?

Another example

LightwaveRF offers "Moods" so I can create a Mood, such as "Spots 100%", for my Kitchen. This appears in Alexa as "Kitchen mood spots 100%", which is pretty hard to remember, let alone say correctly.

So we need a way to create an Alias for that mood, and a way to pick Aliases for the blinds control. In that case, mapping CLOSE -> off and OPEN -> on would make life a lot easier.

Let's combine both wishes and try to open the blinds.

Before: "Alexa, Office Blinds Off"
After: "Alexa, Blinds Close" (or even better) "Alexa, Close Blinds"

And that unwieldy mood:
Before: "Alexa, Kitchen mood spots 100%"
After: "Alexa, lights full" (using an alias of "Kitchen lights full", remember, Alexa knows we are in the Kitchen...)
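The two ideas above can be combined in a small sketch. To be clear, Alexa exposes no such API; the function and table names below are entirely invented, purely to illustrate how a per-device room setting plus a user-defined alias table could rewrite the spoken phrase before a smart-home Skill ever sees it:

```python
# Hypothetical sketch only: these names do not correspond to any real
# Alexa API. Per-device room settings, as proposed for each Echo/Dot.
DEVICE_ROOMS = {
    "echo-kitchen": "kitchen",
    "dot-office": "office",
}

# User-defined aliases: (room, spoken phrase) -> the command the
# Skill actually exposes (e.g. mapping CLOSE -> off).
ALIASES = {
    ("office", "close blinds"): "office blind off",
    ("office", "open blinds"): "office blind on",
    ("kitchen", "lights full"): "kitchen mood spots 100%",
}

def expand(device_id: str, utterance: str) -> str:
    """Rewrite an utterance using the device's room and any matching alias."""
    room = DEVICE_ROOMS.get(device_id)
    phrase = utterance.lower().strip()
    # 1. Alias lookup, scoped to the room this device is in.
    if (room, phrase) in ALIASES:
        return ALIASES[(room, phrase)]
    # 2. Otherwise prefix the room, so "turn off the lights" targets this room.
    return f"{room} {phrase}" if room else phrase

print(expand("dot-office", "close blinds"))          # -> office blind off
print(expand("echo-kitchen", "lights full"))         # -> kitchen mood spots 100%
print(expand("echo-kitchen", "turn off the lights")) # -> kitchen turn off the lights
```

The point of the sketch is that both features are cheap lookups layered in front of the existing parser, not changes to the Skills themselves.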

What do you think?

Also, I can complain, or rather suggest this improvement, here. But is there an OFFICIAL Alexa forum where these points might be seen by their QA, Product Managers or Developers?

Cheers

Chris
« Last Edit: February 23, 2017, 06:25:24 am by ChrisThomas »

kevb
Re: Context Awareness & Ability
« Reply #1 on: February 23, 2017, 07:29:31 am »
This has been discussed many times in this forum. There is a Feedback section in your Alexa app. Click the menu button and look lower left. Good luck!

coyote

Re: Context Awareness & Ability
« Reply #2 on: February 23, 2017, 10:44:13 am »
What you are delineating here should not be complaints, but suggestions.
It's worth remembering that the voice-operated home for the multitudes is really in its infancy. So we cannot expect all the features we want; it will take years, and perhaps decades, for that to occur. I'm amazed that we finally even have HAL available to us this way!
So from there it's a question of priorities. I too would love room-level awareness, but IMO that needs to take a back seat to security, to voice biometrics. Because the thing that will take Alexa from novelty Easter-egg gadget to real useful tool is the ability to identify who is talking to it, and to act (or not) based on that. "Alexa, PopMoney $1500 to John Smith" or "Alexa, unlock the front door" make eminent sense... but only if Alexa can know you are Bob Jones, say "OK Bob, I've transferred blah blah blah", and perform the transfer from YOUR account rather than your roommate's. It should unlock the door at the command of you or your roommate, but NOT at the command of the burglar standing outside. And when your wife says "Alexa, manicure appointment Sunday at 2pm", the system should put that on HER calendar and not on yours.

So my priority order, just off the top of my head, for Alexa future capability:
1. Security: voice biometric verification and identification.
2. Labeling of alarms and timers. "Alexa, set a 10-minute timer for rice" - and after 10 minutes, "Rice timer is complete BEEP BEEP".
3. PUSH notifications. Allow it to say, out loud, notifications and events that come up on the calendar etc.
4. Room-level location capability for smart home operation.
5. Voice control over screen usage ("Alexa, page down", "Alexa, zoom in", etc.).
6. Multiple commands, and conversational interaction. "Alexa, set a 10-minute timer for rice and a 15-minute timer for the rotisserie" - and Alexa replies, "Those timers are set. Will there be anything else?"
7. Become a speakerphone.

mike27oct

Re: Context Awareness & Ability
« Reply #3 on: February 23, 2017, 01:28:19 pm »
To the OP and others: you need to understand that it is the APPs for the gadgets you use, and not Alexa, that control the way things operate. Basically, Alexa is just the intermediary between, say, a smart plug and the action taken. If things do not work correctly, it is most likely caused by the app or its interaction with Alexa.

In addition, long, drawn-out suggestions should be passed on to Amazon (or the app maker), not this forum. Members here can't do a thing to change things. If one does not know how to pass on suggestions to Amazon or the manufacturer of their smart gadget/app, then they need to find out how.

jwlv
Re: Context Awareness & Ability
« Reply #4 on: February 23, 2017, 02:22:09 pm »
Although voice biometrics is a nice feature to have, I don't think Amazon wants to take on that kind of liability. If Alexa misidentifies a voice and causes financial loss for someone, it'll be headline news. I'm pretty sure Amazon doesn't want that kind of publicity.

I'm not sure how technologically advanced voice print identification is presently. Would it even be possible with consumer-grade components? This is an area I have not studied, so whatever I'm saying here is mere speculation.

coyote

Re: Context Awareness & Ability
« Reply #5 on: February 23, 2017, 05:41:09 pm »
Quote from jwlv: "Although voice biometrics is a nice feature to have, I don't think Amazon wants to take on that kind of liability. [...] This is an area I have not studied, so whatever I'm saying here is mere speculation."

Google already deploys it on some of their mobile devices. But your point is well taken.

ChrisThomas

Re: Context Awareness & Ability
« Reply #6 on: February 23, 2017, 06:33:22 pm »
I believe the processing power of Alexa, in either the Echo or the Dot, is pretty limited. It basically detects the "Alexa" key word, identifies the start and end of the command, and transmits this to Amazon (AWS), where it gets interpreted. The processing power there can be MUCH greater than in our local devices, and even better, it is not fixed: it can be increased over time.

I think identification of users is certainly a feature I have also felt was missing. Would I use it to transfer money or open doors? Probably not. Security IS an issue there, and it could certainly be spoofed. But it would be useful in terms of interpreting what I am saying.

When I ask Alexa to play music, it accesses my Spotify account. If my wife asked, it should really access her account, not mine. I can see that being useful, and not a security issue. Same thing for lists: we should really have lists per person, not one-size-fits-all.

Indeed, user sensing is just another form of context, after all, no? It matters what we say, who says it, and, since Alexa can manifest in different locations, WHERE it is being said. You could even add WHEN to that list. Basically, the parser up on AWS really needs to handle context much better.
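To make the idea concrete, here is an illustrative-only sketch of a request enriched with those four kinds of context (what, who, where, when) before the cloud-side parser interprets it. Every field and function name here is invented for the example; for instance, routing a "play music" request to the speaker's own Spotify account:

```python
# Purely illustrative: none of these structures exist in Alexa's real
# pipeline. The idea is that every utterance carries its context along.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ContextualRequest:
    utterance: str           # WHAT was said
    speaker: Optional[str]   # WHO said it (voice ID, if identifiable)
    room: Optional[str]      # WHERE the device is
    timestamp: datetime      # WHEN it was said

def pick_spotify_account(req: ContextualRequest, accounts: dict) -> str:
    """Route to the speaker's own account, else the household default."""
    return accounts.get(req.speaker, accounts["default"])

accounts = {"chris": "spotify:chris", "wife": "spotify:wife",
            "default": "spotify:chris"}
req = ContextualRequest("play music", "wife", "kitchen", datetime.now())
print(pick_spotify_account(req, accounts))  # -> spotify:wife
```

The same record could drive per-person lists and room-scoped device commands: one context object, consulted by every feature.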

See, you agree after all, context is probably the biggest missing feature of Alexa right now.  ;)

As for the Apps being at fault, I don't really agree. The Skills, and how they are coded, certainly is not down to Amazon, beyond the API that they provide to 3rd parties for this. BUT, they do provide the API, and they expose this to us in the UI in the Alexa app. Its this that I would like Amazon to improve upon. If you provide a method for providing Aliases to devices, you can paper over an cracks that appear between Alexa and a particular Skill.
« Last Edit: February 23, 2017, 06:36:00 pm by ChrisThomas »