peter: This is way out of scope at the moment.
Yeah, I thought so.
But it's actually here already; it's a pity Apple hasn't allowed it in individual software.
Here's how I would use it if it were available.
I can already say "SIRI, what's the time?" and it gives me the time with an included response. This is much more than I need, but it appears very smart and works well.
In ST I could really use this to my advantage on stage.
Simply activate the onboard microphone while in ST.
Now the software is listening. I provide it with trigger words that send it into action, like:
FIND SONG, PLAY, NEXT SONG, FIND TEXT. These key words are triggered only by my voice, as I have recorded them in myself, and they activate different actions ONLY in ST and within the ST folder. In other words, this is a simple search engine that translates words into text and searches/compares that text against all my songs within a playlist if I am in playlist mode, or within the song if I am in song mode.
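The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not real ST code: all names here (PLAYLIST, TRIGGERS, handle_transcript) are made up, and it assumes a speech-to-text engine has already handed us a transcript.

```python
# Hypothetical sketch: act only on our own trigger words, and search
# only the current playlist, never the whole world.

PLAYLIST = ["Burning Love", "Love Me Tender", "Suspicious Minds"]
TRIGGERS = ("FIND SONG", "PLAY", "NEXT SONG", "FIND TEXT")

def handle_transcript(transcript):
    """Match a transcribed phrase against the trigger words, then compare
    the rest of the phrase against song titles in the playlist only."""
    text = transcript.upper().strip()
    for trigger in TRIGGERS:
        if text.startswith(trigger):
            query = text[len(trigger):].strip()
            matches = [s for s in PLAYLIST if query in s.upper()]
            return trigger, matches
    return None, []  # ordinary chat, not a command: ignore it

# "FIND SONG love" brings up every title containing LOVE to choose from.
print(handle_transcript("FIND SONG love"))
```

The point of the sketch is that the comparator is tiny: one list of trigger words and one list of titles, nothing like a full assistant.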
Example: while talking with the audience (chat, chat, PAUSE! Next song, PAUSE! Burning Love, PAUSE! chat, chat), in the meantime the NEXT SONG IS LOADED AND READY, to simply hit start with a 3, 4 visual count-in. I think it's doable, or will be very soon.
There are plenty of apps out there successfully doing this and translating languages, so this should be possible. We don't need a complicated SIRI or a TRANSLATOR; we simply need voice recognition that turns sound into text, and a comparator that looks at a specific area, brings up a matching result, and takes appropriate action.
For example, as I am speaking to the audience and I say NEXT SONG IS, the software triggers the search, enters the song title, and immediately places the found file as the next song. This is the same as saying "SIRI!" (the trigger to listen) "what's my schedule for today?" and voilà! We get our information with a nice humanised response, which we don't really need or want. So this is very possible already, except silly Apple doesn't want us to actually make it work for us in other software.
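Spotting the trigger phrase mid-sentence works the same way. A hypothetical sketch (the names and playlist are invented, and this stands in for a real speech-to-text stream):

```python
# Hypothetical sketch: scan running speech for "NEXT SONG IS" and treat
# whatever follows as the song title to queue.

PLAYLIST = ["Burning Love", "Love Me Tender", "Suspicious Minds"]
TRIGGER = "NEXT SONG IS"

def queue_from_speech(spoken):
    """If the trigger phrase appears anywhere in the sentence, take the
    words after it as the title and queue the first matching song."""
    text = spoken.upper()
    pos = text.find(TRIGGER)
    if pos == -1:
        return None                       # ordinary chat: do nothing
    title = text[pos + len(TRIGGER):].strip()
    for song in PLAYLIST:
        if song.upper() == title:
            return song                   # this becomes the next song
    return None                           # no exact title match

print(queue_from_speech("thanks folks, the next song is burning love"))
```

Everything before the trigger phrase is ignored, so you can keep chatting to the audience and the command still lands.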
This is very stupid of them. To me, this sort of capability is worth a lot more in ST3 and on stage than it is on my watch, as I can lift my hand to see the time or open my diary to see my daily schedule, but I can't stuff around on stage looking for my next song.
Just think if we could simply say to the iPad, while in ST, "PLAY! Love Me Tender", and the song starts playing; or "FIND! Love", and all the LOVE results are presented to choose from; or "NEXT SONG! Love Me Tender", and it gets loaded as the next song. This would eliminate a lot of unnecessary typing and ugliness on stage. I can already do that on the Google music player devices I have around the home: I simply say "Hey Google, play BURNING LOVE", the device connects wirelessly to my router, searches the whole world, finds BURNING LOVE, and I am listening to it within seconds.
This is already here, so why don't we have it in dedicated software in a much simpler way?
We don't need responses, we don't need to search the world; we only need to search a small specific area on the device and take very limited action once the file is found. THIS SHOULD BE AVAILABLE TO US ALREADY.
I had powered speakers at least 10 years before they came out on the market, not because I was a genius, but because I came up with the idea and took action. The same applies to this idea: someone is going to do it, it's just a matter of it being done.
As I said, language translators are already common and voice search is already common; it's just a matter of putting these capabilities to good use.
I thought AI was already here. Where is it? I don't see it!
To add: we have Bluetooth ear mics that phones use, so why not use one of those as the sound source for the audio input to ST, used for verbal remote control? We normally use the audio output for playing music, so why not make good use of the audio input to ST for a very useful purpose? This could also be used to jump audio segments verbally: X to Verse, X to Chorus 1, X to Solo 2, X to End. "X to" is the trigger command, and Verse, Chorus 1, Solo 2 are the audio segment names. As you say "X to", it triggers the listen mode; the name has to match the exact writing of the segment, and if there's a match it selects that segment as NEXT TO PLAY. If you change your mind, say X Cancel, or X End if you want to jump to the ending instead at the last moment.
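The "X to" grammar described above is small enough to sketch directly. Again a hypothetical illustration only (segment names and function are invented, not any real ST3 feature):

```python
# Hypothetical sketch of the "X to" segment-jump grammar: "X to" opens
# listen mode, the words after it must match a segment name exactly,
# and "X Cancel" clears a pending jump.

SEGMENTS = ["Verse", "Chorus 1", "Solo 2", "End"]

def parse_command(spoken, pending):
    """Return the new pending NEXT TO PLAY segment (or None)."""
    text = spoken.strip().upper()
    if text == "X CANCEL":
        return None                  # changed your mind at the last moment
    if text.startswith("X TO "):
        name = spoken.strip()[5:]    # the part after "X to "
        if name in SEGMENTS:         # must match the segment's exact writing
            return name
    return pending                   # not a valid command: keep current state

pending = parse_command("X to Chorus 1", None)      # Chorus 1 queued next
pending = parse_command("X to Chorus 99", pending)  # no such segment: unchanged
```

Because anything that isn't an exact command leaves the state untouched, the mic can listen constantly, even mid-song, without chat triggering accidental jumps.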
You simply mute your vocal mic and give the verbal command, or move away from the main mic and speak into the Bluetooth in-ear mic, which is constantly monitoring for the correct command. This should be very possible if given enough thought. Whether the song is playing or not, you should be able to control audio segments and search song functions. No fuss, no foot dancing or touching the screen. AI is here, so let's use it.
All these ideas are just thrown together without much thought about naming, but it's the overall concept that I am trying to point out. It's so nice to verbally say "Hey Google! Play Croatian top hits" and presto, it just works, so why can't we do this with individual software?