
Expose AI commands/Action Menu commands on UDP interface
Acknowledged, Wishlist, Public

Description

With the current Action Menu/AI Control method of using F-keys, voice recognition programs have to run on the same PC as A3, and touchscreen programs can't be used to send the commands, as A3 would lose focus and not receive the F-keys when the touchscreen is touched. It would be much better if all the commands available through the F-keys could also be triggered through a UDP interface using direct commands.

So for example, a command would look like Team Red/Flank Right. The commands are only shown nested like that to make the list easy to follow; they are actually sent as a single command, "Team Red Flank Right".

This would make creating profiles in voice recognition software much simpler, as there would be no need to work out which F-key sequence is required for each command, enter it and associate it with a spoken phrase. Instead, the program could listen for prefix commands like "Team Red", "Team White", etc. and, if one of those is heard, then listen for "Flank Left", "Flank Right", etc. Once a complete phrase like "Team Red Flank Left" is recognised, it looks for a matching command in the A3 list and sends it over the UDP interface. All of this happens without the user needing to pause between saying each part of the phrase, which is much more natural, and by avoiding the menus it is probably also somewhat easier and quicker for A3 to process.
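To illustrate how little the sending program would need to do, here is a minimal Python sketch of the sender side; the port number and the exact command string are just assumptions, since no such interface exists in A3 yet:

    import socket

    # Hypothetical address/port on which A3 would listen for command strings.
    A3_HOST, A3_PORT = "127.0.0.1", 2345

    def send_command(command):
        """Send a recognised phrase, e.g. "Team Red Flank Left", as one UDP datagram."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(command.encode("utf-8"), (A3_HOST, A3_PORT))
        sock.close()

    # A voice recognition or touchscreen program would call this once a complete
    # phrase has been recognised or a button has been tapped:
    send_command("Team Red Flank Left")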

Voice Attack already provides a system of prefix and suffix commands, but the UI and presentation of the commands leave a lot to be desired, and of course the user still has to enter the required F-key sequence to associate with each phrase.

Listening for commands on a UDP interface would also allow for the use of voice recognition programs on tablets and the use of touchscreen programs on a secondary monitor, with several rows of buttons marked with the commands, so the player doesn't have to try and remember obscure key sequences but can just tap a couple of clearly marked on-screen buttons. Some people have disabilities that prevent them from using voice recognition, so this provides a user-friendly alternative for them.

Details

Legacy ID: 767647786
Severity: None
Resolution: Open
Reproducibility: Always
Category: Feature Request
Additional Information

There is no need to remove the F-key menu system to implement this additional interface, and if BIS decides to replace that with a more intuitive keyboard/mouse-operated system (like a commo rose) at some point, that is a completely separate issue from the UDP interface. Ideally the list of commands recognised on the UDP interface should be kept in sync with any other command interface systems, but one of the advantages of the system X-Plane uses is that new commands can be added whilst keeping the existing ones for redundancy/backwards compatibility. So if someone realises there was a spelling mistake in one of the commands, or that it could be more accurately named, the new command and the old command can co-exist, avoiding the need for users to edit their existing profiles, whilst only the new command appears in the F-key or commo rose menus.
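As a purely hypothetical sketch of that X-Plane-style redundancy (neither of these identifiers exists in A3), both the old and the corrected command string could simply resolve to the same action, with only the corrected one exposed in the menus:

    # Hypothetical command table: both spellings stay valid on the UDP interface,
    # but only the entry marked "menu" appears in the F-key or commo rose menus.
    COMMANDS = {
        "Team Red/Flank Rigth": {"action": "team_red_flank_right", "menu": False},  # old misspelt name kept so existing profiles still work
        "Team Red/Flank Right": {"action": "team_red_flank_right", "menu": True},   # corrected name, the only one shown in menus
    }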

This interface could be used not only for AI commands but also for interfacing physical buttons and switches to commands such as Detonate, Gear Up/Down, etc. These commands could be sent from voice recognition programs, but also from touchscreen programs and from utilities that interface between games and Arduino boards with physical controls attached. If the control panel is built specifically for A3, the Arduino firmware can send the required commands over UDP directly, with no need for a utility in the middle, and it could even be a two-way interface if desired, so that data can be sent from A3 regarding gear or flaps status, how many seats in the current vehicle are occupied, etc. and be indicated by LEDs or other displays connected to the Arduino.
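For the case where a utility does sit in the middle, a rough Python sketch of such a bridge might look like this; the serial port, baud rate, both port numbers and the status message format are all invented for illustration:

    import socket
    import serial  # pyserial, assumed to be installed

    A3_ADDR = ("127.0.0.1", 2345)   # hypothetical port A3 listens on for commands
    STATUS_PORT = 2346              # hypothetical port A3 sends status messages to

    ser = serial.Serial("COM5", 115200, timeout=0.1)   # Arduino with physical switches

    status_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    status_sock.bind(("0.0.0.0", STATUS_PORT))
    status_sock.settimeout(0.1)

    cmd_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    while True:
        # A switch is flipped -> the Arduino prints e.g. "Gear Down" -> forward it to A3.
        line = ser.readline().decode("utf-8").strip()
        if line:
            cmd_sock.sendto(line.encode("utf-8"), A3_ADDR)

        # Status coming back from A3, e.g. "Gear/Down" -> forward to the Arduino for its LEDs.
        try:
            data, _ = status_sock.recvfrom(1024)
            ser.write(data + b"\n")
        except socket.timeout:
            pass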

So whilst a simpler and more natural way of using voice recognition to control the AI is my main motivation, implementing the UDP interface opens up a lot of other possibilities. I would think the work required would be fairly minimal: pressing an F-key sequence obviously results in the appropriate piece of code being triggered for that command, so all that's needed is the UDP interface itself, which waits to receive one of the commands, with each command associated with the appropriate piece of code to trigger. This might even be something that could be implemented as a mod, but it would probably need BIS to provide a way to hook a UDP interface into the code first.
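To make the point about how little is involved on the receiving side, here is a minimal sketch of the listener loop; the handler name and the idea that the existing F-key code paths could be called like this are of course assumptions about the engine's internals:

    import socket

    def team_red_flank_left():
        # In the real engine this would trigger the same code the F-key menu does.
        print("AI: Team Red, flank left")

    # Each accepted command string maps to the piece of code already triggered
    # by the corresponding F-key sequence.
    HANDLERS = {
        "Team Red Flank Left": team_red_flank_left,
    }

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 2345))   # hypothetical command port

    while True:
        data, addr = sock.recvfrom(1024)
        command = data.decode("utf-8").strip()
        handler = HANDLERS.get(command)
        if handler:
            handler()
        # Unknown commands are simply ignored.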

Event Timeline

doveman edited Additional Information. Oct 19 2015, 6:44 AM
doveman set Category to Feature Request.
doveman set Reproducibility to Always.
doveman set Severity to None.
doveman set Resolution to Open.
doveman set Legacy ID to 767647786. May 8 2016, 12:58 PM
doveman added a subscriber: doveman.

Just wanted to explain that the proposed method is very much compatible with modding. As is done with X-Plane, the vanilla commands can be grouped under a prefix like A3/ or Vanilla/, and then each mod has its own prefix, such as ACE/ or TFAR/.

So vanilla commands would look like "A3/Team Red/Line Formation", whilst mod commands would look like "TFAR/Short Range/Channel One" (I'm just making up examples here).

Some care has to be taken to avoid conflicts between commands and to make them match what people would actually say, so that the voice recognition program can find the matching command automatically without the user having to map spoken phrases to commands by hand. Obviously the player doesn't say the prefix, just "Team Red Line Formation" or "Short Range Channel One". The vanilla commands would be defined first, and modders would then be expected to design their commands to avoid conflicts.
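A small sketch of how that automatic matching could work in the voice recognition program, using the made-up command names above; the prefix and the nesting slashes are simply stripped before comparing against the spoken phrase:

    # Full command identifiers as they would be published by the game and mods.
    COMMAND_LIST = [
        "A3/Team Red/Line Formation",
        "TFAR/Short Range/Channel One",
    ]

    def match_phrase(phrase):
        """Return the full command whose spoken form matches the recognised phrase."""
        for command in COMMAND_LIST:
            # Drop the mod prefix and the nesting slashes; the player never says those.
            spoken_form = " ".join(command.split("/")[1:])
            if spoken_form.lower() == phrase.lower():
                return command
        return None

    print(match_phrase("Team Red Line Formation"))   # -> "A3/Team Red/Line Formation"
    print(match_phrase("Short Range Channel One"))   # -> "TFAR/Short Range/Channel One"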

Even where more than one mod does the same thing and thus the commands have to be the same or similar, this can easily be handled by the voice recognition program having an option to enable/disable groups of commands. So if modX and modY do much the same thing and use the same command, e.g. "Detonate explosives", and the player is using modX, he would just disable modY's command group in the program, so that when the program recognises the phrase "Detonate explosives" it only finds a match in the enabled modX command list, "modX/Detonate explosives".

In fact, even where a mod replaces a vanilla function and thus needs to use the same command, the program can use a system whereby the mod's list of commands, which is imported into the program and added to the existing list, contains markers indicating which vanilla commands it replaces. When that mod's command group is enabled, the vanilla commands it replaces would automatically be disabled. For example, A3/Pop Flare could be disabled in favour of ACE/Pop Flare, so that when the user says "Pop Flare" there is only one matching command for the program to find.
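A sketch of how the enable/disable groups and the replacement markers could interact when building the list of commands eligible for matching; the group layout and the "replaces" marker are invented for illustration:

    # Imported command lists, grouped per mod; "replaces" marks the vanilla command
    # that should be suppressed while this group is enabled.
    GROUPS = {
        "A3":   {"enabled": True,  "commands": {"A3/Pop Flare": {}}},
        "ACE":  {"enabled": True,  "commands": {"ACE/Pop Flare": {"replaces": "A3/Pop Flare"}}},
        "modY": {"enabled": False, "commands": {"modY/Detonate explosives": {}}},
    }

    def active_commands():
        """Commands currently eligible for matching, honouring groups and overrides."""
        replaced = set()
        for group in GROUPS.values():
            if group["enabled"]:
                for info in group["commands"].values():
                    if "replaces" in info:
                        replaced.add(info["replaces"])
        active = []
        for group in GROUPS.values():
            if group["enabled"]:
                for name in group["commands"]:
                    if name not in replaced:
                        active.append(name)
        return active

    print(active_commands())   # ['ACE/Pop Flare'] - A3/Pop Flare and modY's command are suppressed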

The voice recognition program would still need to provide the option to manually map phrases to commands if it proves necessary but in most cases the automatic matching should be sufficient.