
How to detect that speech recognition is in progress

Updated: 2023-10-09 23:52:40

I finally found the ultimate solution.

It is simple and elegant, it will pass Apple review, and it always works. Just react to UIControlEventEditingChanged and detect the presence of the object-replacement character (U+FFFC) that dictation inserts as a placeholder, like this:

-(void)viewDidLoad {
  [super viewDidLoad];

  [self.textField addTarget: self
                     action: @selector(eventEditingChanged:)
           forControlEvents: UIControlEventEditingChanged];
}

-(IBAction)eventEditingChanged:(UITextField *)sender {
  // While dictation is running, the text field contains the object
  // replacement character (U+FFFC) as a placeholder for the pending result.
  NSRange range = [sender.text rangeOfString: @"\uFFFC"];
  self.sendButton.enabled = range.location == NSNotFound;
}
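If you also want to strip the placeholder from the text itself, not just gate the send button, a minimal sketch along the same lines (the cleanedText: helper name is mine, not from the original answer):

- (NSString *)cleanedText:(UITextField *)textField {
  // Remove any dictation placeholder (U+FFFC) characters before using the text.
  return [textField.text stringByReplacingOccurrencesOfString: @"\uFFFC"
                                                   withString: @""];
}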


Finally I've found a solution. This is improved concept nr 3 with a mix of concept nr 2 (based on that answer).

-(void)viewDidLoad {
  [super viewDidLoad];

  [self.textField addTarget: self
                     action: @selector(eventEditingChanged:)
           forControlEvents: UIControlEventEditingChanged];
}

-(IBAction)eventEditingChanged:(UITextField *)sender {
  // While dictation is active, the current input mode reports "dictation"
  // as its primary language.
  // Note: +currentInputMode is deprecated since iOS 7; sender.textInputMode
  // can be used instead.
  NSString *primaryLanguage = [UITextInputMode currentInputMode].primaryLanguage;

  if ([primaryLanguage isEqualToString: @"dictation"]) {
    // Block sending while speech recognition is still in progress.
    self.sendButton.enabled = NO;
  } else {
    // restore normal text field state
    self.sendButton.enabled = self.textField.text.length > 0;
  }
}

- (IBAction)sendMessage: (id)sender {
  [self.chatService sendMessage: self.messageTextField.text];
  self.messageTextField.text = @"";
}

- (BOOL)textFieldShouldReturn:(UITextField *)textField {
  if (self.textField.text.length == 0 || !self.sendButton.enabled) {
    return NO;
  }
  [self sendMessage: textField];
  return YES;
}

// other UITextFieldDelegate methods ...
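A related option, not part of the original answer, is to observe input-mode changes instead of checking the mode inside the editing-changed handler. A minimal sketch using UIKit's UITextInputCurrentInputModeDidChangeNotification (the inputModeChanged: selector name is mine):

-(void)viewDidLoad {
  [super viewDidLoad];

  // Same registration as above, plus an observer for input-mode changes
  // (e.g. when the user switches to or from dictation).
  [self.textField addTarget: self
                     action: @selector(eventEditingChanged:)
           forControlEvents: UIControlEventEditingChanged];

  [[NSNotificationCenter defaultCenter]
      addObserver: self
         selector: @selector(inputModeChanged:)
             name: UITextInputCurrentInputModeDidChangeNotification
           object: nil];
}

-(void)inputModeChanged:(NSNotification *)notification {
  // During dictation the active input mode reports "dictation" as its
  // primary language.
  NSString *primaryLanguage = self.textField.textInputMode.primaryLanguage;
  BOOL dictating = [primaryLanguage isEqualToString: @"dictation"];
  self.sendButton.enabled = !dictating && self.textField.text.length > 0;
}

Since iOS 9, selector-based observers are unregistered automatically on dealloc; on earlier versions the observer should be removed manually.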

Now the problem doesn't appear, because the user is blocked during the window in which it could happen (exactly between the moment the user presses the "Done" button on the dictation view and the moment the results come back from the speech recognition service).

The good thing is that only public API is used (only the @"dictation" string could be a problem, but I think it should be accepted by Apple).
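For completeness, a minimal sketch of how the two detections could be combined into a single gate; this combination is my own illustration, not spelled out in the original answer. The send button stays disabled while either the dictation input mode is active or the placeholder character is still present:

-(IBAction)eventEditingChanged:(UITextField *)sender {
  BOOL dictating =
      [[UITextInputMode currentInputMode].primaryLanguage isEqualToString: @"dictation"];
  BOOL hasPlaceholder =
      [sender.text rangeOfString: @"\uFFFC"].location != NSNotFound;

  // Allow sending only when dictation is finished, the placeholder is gone,
  // and there is actual text to send.
  self.sendButton.enabled = !dictating && !hasPlaceholder && sender.text.length > 0;
}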