Cortana background interactions with Universal Windows app

This article demonstrates how to use Cortana and voice commands to activate and launch a Universal Windows app.

Integrating Cortana into your app is basically a four-step process.

1. Define a voice command XML file (VCD file)
2. Install the voice commands defined in the VCD file
3. Handle the OnActivated method to process voice commands
4. Add the Microphone capability in Package.appxmanifest
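
Step 4 is easy to overlook. In Package.appxmanifest, the microphone is declared as a device capability under the Capabilities node; a minimal fragment (per the UWP manifest schema) looks like this:

```xml
<Capabilities>
  <!-- Required so the app can receive audio input for speech recognition -->
  <DeviceCapability Name="microphone" />
</Capabilities>
```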

Let’s define our VCD file as follows:

<?xml version="1.0" encoding="utf-8"?>
<!-- Be sure to use the new v1.1 namespace to utilize the new PhraseTopic feature -->
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
  <!-- The CommandSet Name is used to programmatically access the CommandSet -->
  <CommandSet xml:lang="en-us" Name="SetColorCommandSet_en-us">
    <!-- The CommandPrefix provides an alternative to your full app name for invocation -->
    <CommandPrefix>Set color</CommandPrefix>
    <!-- The CommandSet Example appears in the global help alongside your app name -->
    <Example> Set color red </Example>

    <Command Name="ShowCommand">
      <Example> Set color Blue </Example>
      <ListenFor> {dictatedShowTerms} </ListenFor>
      <Feedback> Setting color ... </Feedback>
      <Navigate Target="MainPage.xaml" />
    </Command>

    <Command Name="NaturalLanguageCommand">
      <Example> Show me red color </Example>
      <ListenFor> {naturalLanguage} </ListenFor>
      <Feedback> Showing color ... </Feedback>
      <Navigate Target="MainPage.xaml" />
    </Command>

    <PhraseTopic Label="dictatedShowTerms" Scenario="Search">
      <Subject> Blue </Subject>
      <Subject> Black </Subject>
      <Subject> Green </Subject>
      <Subject> Red </Subject>
      <Subject> Yellow </Subject>
    </PhraseTopic>

    <PhraseTopic Label="naturalLanguage" Scenario="Natural Language">
      <Subject> I want to see Blue color </Subject>
      <Subject> Show me Black color </Subject>
      <Subject> What about Green color! </Subject>
      <Subject> Show Red </Subject>
      <Subject> Paint Yellow </Subject>
    </PhraseTopic>
  </CommandSet>
</VoiceCommands>

Now let’s understand the important elements of this file.
xml:lang="en-us" defines the culture for the command set. One important thing to notice is that your language, region settings, and speech language must match the culture you have defined in the VCD file for the command set. For this article, I have used en-US as the culture, which means my default language is set to English (United States).
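
If you want to verify at runtime which language the speech system is using, the SpeechRecognizer class exposes it as a static property. A small diagnostic sketch (this is my own addition, not part of the article's sample):

```csharp
using System.Diagnostics;
using Windows.Media.SpeechRecognition;

// Log the language that speech recognition (and Cortana) is currently using.
// Voice commands only activate when this matches a CommandSet's xml:lang.
var speechLanguage = SpeechRecognizer.SystemSpeechLanguage;
Debug.WriteLine($"System speech language: {speechLanguage.LanguageTag}");
// For the SetColorCommandSet_en-us command set, this should be "en-US".
```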

The command prefix is a phrase or set of words that you use to invoke the app. Note that every command you speak must start with this prefix.

Show command
The Show command is an example of a normal command you define for Cortana to understand what to dictate and pass to the application. The ListenFor element defines what to listen for. Whatever you put in curly braces is a PhraseTopic that you define for the command. In our case it is dictatedShowTerms, where I have simply listed the names of a few colors. Note that the Scenario is set to Search. You can also define optional words for ListenFor in square brackets, like the following:

<ListenFor> [Show me] {dictatedShowTerms} </ListenFor>  

Natural Language command
The natural language command, as the name suggests, is where the fun begins with Cortana! You can define natural ways of speaking voice commands, such as “I want to see Blue color” or “Show red”. Note that we tell Cortana to treat this PhraseTopic as natural language by setting the Scenario to the Natural Language value.

The Feedback element defines what Cortana says back to the user upon successful detection of the voice command.

Now it’s time to write some code to install the VCD file. For this article, I have used the Blank Universal Windows project template. In the App.xaml.cs file, I have the following method defined:

private async Task InstallVoiceCommandsAsync()
{
    var storageFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///VoiceCommands.xml"));
    await VoiceCommandDefinitionManager.InstallCommandDefinitionsFromStorageFileAsync(storageFile);
}

This method simply gets the VCD file from the app package and installs it using the InstallCommandDefinitionsFromStorageFileAsync method. We call this method in OnLaunched as follows:

protected override async void OnLaunched(LaunchActivatedEventArgs e)
{
    // ... standard frame initialization generated by the project template ...

    await InstallVoiceCommandsAsync();
}

To process the voice command results, we need to define our OnActivated method as follows:

protected override void OnActivated(IActivatedEventArgs args)
{
    var rootFrame = Window.Current.Content as Frame;

    // Do not repeat app initialization when the Window already has content,
    // just ensure that the window is active
    if (rootFrame == null)
    {
        // Create a Frame to act as the navigation context and navigate to the first page
        rootFrame = new Frame();

        // Place the frame in the current Window
        Window.Current.Content = rootFrame;
    }

    if (args.Kind == ActivationKind.VoiceCommand)
    {
        var commandArgs = args as VoiceCommandActivatedEventArgs;

        if (commandArgs != null)
        {
            SpeechRecognitionResult speechRecognitionResult = commandArgs.Result;

            var voiceCommandName = speechRecognitionResult.RulePath[0];
            var textSpoken = speechRecognitionResult.Text;

            switch (voiceCommandName)
            {
                case "NaturalLanguageCommand":
                case "ShowCommand":
                    if (textSpoken.ToLower().Contains("black"))
                        rootFrame.Navigate(typeof(MainPage), "Black");
                    else if (textSpoken.ToLower().Contains("blue"))
                        rootFrame.Navigate(typeof(MainPage), "Blue");
                    else if (textSpoken.ToLower().Contains("red"))
                        rootFrame.Navigate(typeof(MainPage), "Red");
                    else if (textSpoken.ToLower().Contains("green"))
                        rootFrame.Navigate(typeof(MainPage), "Green");
                    else if (textSpoken.ToLower().Contains("yellow"))
                        rootFrame.Navigate(typeof(MainPage), "Yellow");
                    else
                        rootFrame.Navigate(typeof(MainPage), "All");
                    break;
            }
        }
    }

    // Ensure the current window is active
    Window.Current.Activate();
}



Here the code is pretty much self-explanatory. We process the activation arguments and, based on the words in the result text, navigate to MainPage with the color name as a parameter. Based on this parameter, MainPage shows the color by filling the rectangle defined in MainPage.xaml. You can download the code supplied at the end of this article to see that part. Now it’s time to run our Universal app with Cortana!
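
The MainPage side is not shown in the article, but a minimal sketch of the idea looks like the following. The Rectangle name ColorRectangle and the use of OnNavigatedTo are my assumptions here, not the author's exact code:

```csharp
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    // The color name ("Red", "Blue", ... or "All") passed from App.OnActivated
    var colorName = e.Parameter as string;

    // ColorRectangle is a hypothetical Rectangle defined in MainPage.xaml
    switch (colorName)
    {
        case "Red":    ColorRectangle.Fill = new SolidColorBrush(Colors.Red);    break;
        case "Blue":   ColorRectangle.Fill = new SolidColorBrush(Colors.Blue);   break;
        case "Green":  ColorRectangle.Fill = new SolidColorBrush(Colors.Green);  break;
        case "Black":  ColorRectangle.Fill = new SolidColorBrush(Colors.Black);  break;
        case "Yellow": ColorRectangle.Fill = new SolidColorBrush(Colors.Yellow); break;
    }
}
```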

Once we run the app for the first time, the VCD file gets installed. After that, we can ask Cortana “What can I say?” and it will list all the voice commands available across applications. There we can see our application as follows:

Now, we can simply say “Set color red”. If Cortana understands what we said, we will get a dialog similar to the following:

We can test our natural language command by saying “Set color, Show me yellow color”. Upon successful detection of the command, we will see a dialog similar to the following:

You can download the source code here:

By Jay Nanavati