AppleScriptの穴

Useful & practical AppleScript archive. Click the '★Click Here to Open This Script' link to download each AppleScript.

Category: Text to Speech

Get high-quality TTS voices such as Enhanced and Premium

Posted on October 13 by Takaaki Naganoya

An AppleScript that extracts only the high-quality voices from the TTS voices on macOS.

The TTS environment on macOS keeps changing, and a rule such as "the VoiceIdentifier ends with 'premium'" can no longer pick them all up.

So I inspected the actual TTS voices and wrote an extraction that matches the reality of the macOS 15 era (things change constantly, so it may well change again in the next OS).

Besides IDs containing "premium", IDs containing "enhanced" also appear to be high-quality voices.

Note that the result depends heavily on which TTS voices are installed, so it will differ from environment to environment.

AppleScript name: EnhancedやPremiumなどの高音質音声を取得.scptd
— Created 2018-02-15 by Takaaki Naganoya
— Modified 2024-10-13 by Takaaki Naganoya
— 2018-2024 Piyomaru Software
use AppleScript version "2.8"
use scripting additions
use framework "Foundation"

property NSColor : a reference to current application’s NSColor
property NSArray : a reference to current application’s NSArray
property NSSortDescriptor : a reference to current application’s NSSortDescriptor

set vList to getTTSPremiumVoiceName() of me
–> {"Allison(拡張)", "Ava(拡張)", "Chantal(拡張)", "Daniel(拡張)", "Evan(拡張)", "Joelle(拡張)", "Kate(拡張)", "Kyoko(拡張)", "Moira(拡張)", "Nathan(拡張)", "Noelle(拡張)", "Otoya(拡張)", "Samantha(拡張)", "Susan(拡張)", "Tom(拡張)", "Zoe(拡張)", "Amélie(プレミアム)", "Ava(プレミアム)", "Jamie(プレミアム)", "Zoe(プレミアム)"}

set vIDList to getTTSPremiumVoiceID() of me
–> {"com.apple.voice.enhanced.en-US.Allison", "com.apple.voice.enhanced.en-US.Ava", "com.apple.voice.enhanced.fr-CA.Chantal", "com.apple.voice.enhanced.en-GB.Daniel", "com.apple.voice.enhanced.en-US.Evan", "com.apple.voice.enhanced.en-US.Joelle", "com.apple.voice.enhanced.en-GB.Kate", "com.apple.voice.enhanced.ja-JP.Kyoko", "com.apple.voice.enhanced.en-IE.Moira", "com.apple.voice.enhanced.en-US.Nathan", "com.apple.voice.enhanced.en-US.Noelle", "com.apple.voice.enhanced.ja-JP.Otoya", "com.apple.voice.enhanced.en-US.Samantha", "com.apple.voice.enhanced.en-US.Susan", "com.apple.voice.enhanced.en-US.Tom", "com.apple.voice.enhanced.en-US.Zoe", "com.apple.voice.premium.fr-CA.Amelie", "com.apple.voice.premium.en-US.Ava", "com.apple.voice.premium.en-GB.Malcolm", "com.apple.voice.premium.en-US.Zoe"}

on getTTSPremiumVoiceName()
  set outArray to current application’s NSMutableArray’s new()
  
  
–Make Installed Voice List
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (outArray’s addObject:aDIc)
  end repeat
  
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceIdentifier contains[cd] %@ ", "enhanced")
  
set afilteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (afilteredArray’s valueForKey:"VoiceName") as list
  
  
set bPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceIdentifier contains[cd] %@ ", "premium")
  
set afilteredArray to outArray’s filteredArrayUsingPredicate:bPredicate
  
set bResList to (afilteredArray’s valueForKey:"VoiceName") as list
  
  
  
return (aResList & bResList)
end getTTSPremiumVoiceName

on getTTSPremiumVoiceID()
  set outArray to current application’s NSMutableArray’s new()
  
  
–Make Installed Voice List
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (outArray’s addObject:aDIc)
  end repeat
  
  
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceIdentifier contains[cd] %@ ", "enhanced")
  
set afilteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (afilteredArray’s valueForKey:"VoiceIdentifier") as list
  
  
set bPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceIdentifier contains[cd] %@ ", "premium")
  
set afilteredArray to outArray’s filteredArrayUsingPredicate:bPredicate
  
set bResList to (afilteredArray’s valueForKey:"VoiceIdentifier") as list
  
  
  
return (aResList & bResList)
end getTTSPremiumVoiceID

★Click Here to Open This Script 

Posted in Text to Speech | Tagged 15.0savvy | Leave a comment

Speech test with AVSpeechSynthesizer

Posted on October 13 by Takaaki Naganoya

These days, AppleScript's built-in "say" command can no longer correctly specify some TTS voices in the Japanese environment.

With the macOS TTS environment being updated continuously, it looks quite doubtful whether the "say" command can keep up.

So I looked into reading text aloud without the say command, by calling AVSpeechSynthesizer instead. It is not particularly difficult.

One caveat concerns which TTS voice characters can be used: the Mac OS X-era characters under com.apple.voice work, the Classic Mac OS-era characters under com.apple.speech work, and the Eloquence characters under com.apple.eloquence work as well; only the Siri voices cannot be used.

With this, once saving the spoken audio to a file is also possible, AppleScript itself should be able to take over the say command and handle the work on the AppleScript side instead of leaving it to say.

AppleScript name: AVSpeechSynthesizerで読み上げテスト(言語とテキストを指定).scptd
—
–  Created by: Takaaki Naganoya
–  Created on: 2024/10/12
—
–  Copyright © 2024 Piyomaru Software, All Rights Reserved
—

use AppleScript version "2.4" — Yosemite (10.10) or later
use framework "Foundation"
use framework "AVFoundation"
use scripting additions

set aSynth to current application’s AVSpeechSynthesizer’s alloc()’s init()

set aText to "昔、昔、ある所に、おじいさんとおばあさんが住んでいました。"
set aUttr to current application’s AVSpeechUtterance’s speechUtteranceWithString:(aText)

set aVoice to current application’s AVSpeechSynthesisVoice’s voiceWithLanguage:"ja-JP" –Speak with the default Japanese voice
aUttr’s setVoice:aVoice
aUttr’s setRate:0.6

aSynth’s speakUtterance:aUttr

★Click Here to Open This Script 

AppleScript name: AVSpeechSynthesizerで読み上げテスト(Voice IDとテキストを指定).scptd
—
–  Created by: Takaaki Naganoya
–  Created on: 2024/10/12
—
–  Copyright © 2024 Piyomaru Software, All Rights Reserved
—

use AppleScript version "2.4" — Yosemite (10.10) or later
use framework "Foundation"
use framework "AVFoundation"
use scripting additions

set aSynth to current application’s AVSpeechSynthesizer’s alloc()’s init()

set aText to "むかーし、むかし、ある所に、おじいさんとおばあさんが住んでいました。"
set aUttr to current application’s AVSpeechUtterance’s speechUtteranceWithString:(aText)

set aVoice to current application’s AVSpeechSynthesisVoice’s voiceWithIdentifier:"com.apple.voice.enhanced.ja-JP.Kyoko" –voice and eloquence families work; Siri voices cannot be specified (as far as I can tell)
aUttr’s setVoice:aVoice
aUttr’s setRate:0.6 –from 0.0 to 1.0; 1.0 is fastest

aSynth’s speakUtterance:aUttr

★Click Here to Open This Script 

Posted in Text to Speech | Tagged 15.0savvy AVSpeechSynthesizer | Leave a comment

Computing the difference between the TTS Voice IDs returned by NSSpeechSynthesizer and AVSpeechSynthesisVoice

Posted on October 11 by Takaaki Naganoya

On macOS 15, the number of TTS voice IDs returned by NSSpeechSynthesizer and by AVSpeechSynthesisVoice differs:

NSSpeechSynthesizer: 205
AVSpeechSynthesisVoice: 222

and that caught my attention. One hears that NSSpeechSynthesizer is no longer maintained very seriously and that AVSpeechSynthesisVoice should be used instead, but I wanted to know where exactly the "difference" lies.

So I actually computed the diff with AppleScript and checked.

–>{addItems:{“com.apple.ttsbundle.siri_Nicky_en-US_premium”, “com.apple.ttsbundle.siri_Hattori_ja-JP_premium”, “com.apple.ttsbundle.siri_Helena_de-DE_compact”, “com.apple.ttsbundle.siri_Yu-Shu_zh-CN_compact”, “com.apple.ttsbundle.siri_Gordon_en-AU_compact”, “com.apple.ttsbundle.siri_Martha_en-GB_compact”, “com.apple.ttsbundle.siri_O-Ren_ja-JP_premium”, “com.apple.ttsbundle.siri_Hattori_ja-JP_compact”, “com.apple.ttsbundle.siri_Nicky_en-US_compact”, “com.apple.ttsbundle.siri_Martin_de-DE_compact”, “com.apple.ttsbundle.siri_Li-Mu_zh-CN_compact”, “com.apple.ttsbundle.siri_Dan_fr-FR_compact”, “com.apple.ttsbundle.siri_Aaron_en-US_compact”, “com.apple.ttsbundle.siri_Catherine_en-AU_compact”, “com.apple.ttsbundle.siri_Marie_fr-FR_compact”, “com.apple.ttsbundle.siri_O-Ren_ja-JP_compact”, “com.apple.ttsbundle.siri_Arthur_en-GB_compact”}, minusItems:{}}

I confirmed that 17 TTS voice IDs were added, and also confirmed that there is no TTS voice that the AVSpeechSynthesisVoice side fails to detect.

My personal understanding of the TTS voice IDs is as shown in the table above. What NSSpeechSynthesizer could not detect are the 17 Siri voice characters whose IDs begin with "com.apple.ttsbundle" (presumably OEM voices from another vendor).
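
As a quick way to reproduce that breakdown, a minimal sketch along the following lines tallies the identifiers returned by AVSpeechSynthesisVoice by their reverse-DNS prefix (the variable names are arbitrary):

use AppleScript version "2.8"
use scripting additions
use framework "Foundation"
use framework "AVFoundation"

--Tally TTS voice identifiers by prefix: com.apple.voice / com.apple.speech / com.apple.eloquence / com.apple.ttsbundle
set idList to (current application's AVSpeechSynthesisVoice's speechVoices()'s valueForKey:"identifier") as list
set voiceN to 0
set speechN to 0
set eloquenceN to 0
set ttsbundleN to 0
set otherN to 0
repeat with anID in idList
  set j to contents of anID
  if j starts with "com.apple.voice." then
    set voiceN to voiceN + 1
  else if j starts with "com.apple.speech." then
    set speechN to speechN + 1
  else if j starts with "com.apple.eloquence." then
    set eloquenceN to eloquenceN + 1
  else if j starts with "com.apple.ttsbundle." then
    set ttsbundleN to ttsbundleN + 1
  else
    set otherN to otherN + 1
  end if
end repeat
return {voiceN, speechN, eloquenceN, ttsbundleN, otherN}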

AppleScript name: NSSpeechSynthesizerとAVSpeechSynthesisVoiceで返ってくるTTS Voice IDの違いを計算する
— Created 2024-10-11 by Takaaki Naganoya
— 2019-2024 Piyomaru Software
use AppleScript version "2.8"
use scripting additions
use framework "Foundation"
use framework "AppKit"
use framework "AVFoundation"

set aList to current application’s NSSpeechSynthesizer’s availableVoices() as list
set bList to (current application’s AVSpeechSynthesisVoice’s speechVoices()’s valueForKey:"identifier") as list
set a1Res to checkAllItemsAreSame(aList, bList) of me
–>{addItems:{"com.apple.ttsbundle.siri_Nicky_en-US_premium", "com.apple.ttsbundle.siri_Hattori_ja-JP_premium", "com.apple.ttsbundle.siri_Helena_de-DE_compact", "com.apple.ttsbundle.siri_Yu-Shu_zh-CN_compact", "com.apple.ttsbundle.siri_Gordon_en-AU_compact", "com.apple.ttsbundle.siri_Martha_en-GB_compact", "com.apple.ttsbundle.siri_O-Ren_ja-JP_premium", "com.apple.ttsbundle.siri_Hattori_ja-JP_compact", "com.apple.ttsbundle.siri_Nicky_en-US_compact", "com.apple.ttsbundle.siri_Martin_de-DE_compact", "com.apple.ttsbundle.siri_Li-Mu_zh-CN_compact", "com.apple.ttsbundle.siri_Dan_fr-FR_compact", "com.apple.ttsbundle.siri_Aaron_en-US_compact", "com.apple.ttsbundle.siri_Catherine_en-AU_compact", "com.apple.ttsbundle.siri_Marie_fr-FR_compact", "com.apple.ttsbundle.siri_O-Ren_ja-JP_compact", "com.apple.ttsbundle.siri_Arthur_en-GB_compact"}, minusItems:{}}

–Check whether two 1-D lists contain the same elements (even if the order of appearance differs)
on checkAllItemsAreSame(aList, bList)
  set dRes to getDiffBetweenLists(aList, bList) of me
  
set ddRes to (dRes is equal to {addItems:{}, minusItems:{}}) as boolean
  
if ddRes = true then
    return true
  else
    return dRes
  end if
end checkAllItemsAreSame

–Detect the diff between two 1-D lists
on getDiffBetweenLists(aArray as list, bArray as list)
  set allSet to current application’s NSMutableSet’s setWithArray:aArray
  
allSet’s addObjectsFromArray:bArray
  
  
–Extract only the duplicated elements
  
set duplicateSet to current application’s NSMutableSet’s setWithArray:aArray
  
duplicateSet’s intersectSet:(current application’s NSSet’s setWithArray:bArray)
  
  
–Remove the duplicated part
  
allSet’s minusSet:duplicateSet
  
set resArray to (allSet’s allObjects()) as list
  
  
set aSet to current application’s NSMutableSet’s setWithArray:aArray
  
set bSet to current application’s NSMutableSet’s setWithArray:resArray
  
aSet’s intersectSet:bSet –intersection
  
set addRes to aSet’s allObjects() as list
  
  
set cSet to current application’s NSMutableSet’s setWithArray:bArray
  
cSet’s intersectSet:bSet –intersection
  
set minusRes to cSet’s allObjects() as list
  
  
return {addItems:minusRes, minusItems:addRes}
end getDiffBetweenLists

★Click Here to Open This Script 

Posted in list Text to Speech | Tagged 15.0savvy | Leave a comment

The Text to Speech environment changed again in macOS 15

Posted on October 9 by Takaaki Naganoya

The Text To Speech environment changed substantially in macOS 15. In addition to the existing TTS voice characters, many TTS voices that appear to be OEM-supplied (?) from Speechify's "Eloquence Text to Speech" have been added.

# Several TTS engines named Eloquence exist, including open-source ones, so perhaps it is not an OEM deal after all?

# On closer inspection, I confirmed that the Eloquence TTS voices already exist on macOS 13. Apparently I simply had not noticed until now.

The Eloquence TTS voices are not tied to one particular language but cover multiple languages (not one voice handling several languages; separate versions seem to be provided per language). My guess is that the TTS characters whose IDs begin with com.apple.eloquence are the OEM-supplied ones.

# The open-source (for Android) Eloquence supports only a handful of languages (no Japanese), so I keep wondering what exactly this multi-language Eloquence TTS is...

These voices cannot be specified with AppleScript's say command, but if you select one as the spoken-content character in System Settings, it can apparently be used by speaking with the default voice (the Siri TTS characters could already be used this way).
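
A minimal sketch of that approach: calling say with no voice at all falls back to whichever voice is selected under System Settings > Accessibility > Spoken Content.

--No "using" parameter: the system default spoken-content voice is used
say "これはシステムのデフォルト音声で読み上げられます"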

The number of TTS characters has grown with every major macOS update, reaching 209 in macOS 15 (50 in macOS 11, 174 in macOS 13).
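
Those counts can be reproduced with a one-liner along these lines (a sketch; the result naturally depends on which voices are installed):

use AppleScript version "2.4"
use framework "AppKit"
use scripting additions

--Count the TTS voices that NSSpeechSynthesizer reports as available
set vCount to count of ((current application's NSSpeechSynthesizer's availableVoices()) as list)
--> 209 on macOS 15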

AppleScript name: 現在利用可能なTTSボイスIDの一覧(macOS 15)..scptd
—
–  Created by: Takaaki Naganoya
–  Created on: 2020/06/09
—
–  Copyright © 2020 Piyomaru Software, All Rights Reserved
—

use AppleScript version "2.4" — Yosemite (10.10) or later
use framework "Foundation"
use framework "AppKit"
use scripting additions

set aSynth to current application’s NSSpeechSynthesizer’s availableVoices() as list
–> macOS 15 (209)
— {"com.apple.speech.synthesis.voice.Agnes", "com.apple.speech.synthesis.voice.Albert", "com.apple.speech.synthesis.voice.Alex", "com.apple.voice.compact.it-IT.Alice", "com.apple.voice.compact.en-US.Allison", "com.apple.voice.enhanced.en-US.Allison", "com.apple.voice.compact.sv-SE.Alva", "com.apple.voice.compact.fr-CA.Amelie", "com.apple.voice.premium.fr-CA.Amelie", "com.apple.voice.compact.ms-MY.Amira", "com.apple.voice.compact.de-DE.Anna", "com.apple.voice.compact.en-US.Ava", "com.apple.voice.enhanced.en-US.Ava", "com.apple.voice.premium.en-US.Ava", "com.apple.speech.synthesis.voice.BadNews", "com.apple.speech.synthesis.voice.Bahh", "com.apple.speech.synthesis.voice.Bells", "com.apple.speech.synthesis.voice.Boing", "com.apple.speech.synthesis.voice.Bruce", "com.apple.speech.synthesis.voice.Bubbles", "com.apple.voice.compact.he-IL.Carmit", "com.apple.speech.synthesis.voice.Cellos", "com.apple.voice.enhanced.fr-CA.Chantal", "com.apple.voice.compact.id-ID.Damayanti", "com.apple.voice.compact.en-GB.Daniel", "com.apple.voice.enhanced.en-GB.Daniel", "com.apple.voice.compact.bg-BG.Daria", "com.apple.speech.synthesis.voice.Deranged", "com.apple.eloquence.de-DE.Eddy", "com.apple.eloquence.en-GB.Eddy", "com.apple.eloquence.en-US.Eddy", "com.apple.eloquence.es-ES.Eddy", "com.apple.eloquence.es-MX.Eddy", "com.apple.eloquence.fi-FI.Eddy", "com.apple.eloquence.fr-CA.Eddy", "com.apple.eloquence.fr-FR.Eddy", "com.apple.eloquence.it-IT.Eddy", "com.apple.eloquence.ja-JP.Eddy", "com.apple.eloquence.ko-KR.Eddy", "com.apple.eloquence.pt-BR.Eddy", "com.apple.eloquence.zh-CN.Eddy", "com.apple.eloquence.zh-TW.Eddy", "com.apple.voice.compact.nl-BE.Ellen", "com.apple.voice.compact.en-US.Evan", "com.apple.voice.enhanced.en-US.Evan", "com.apple.eloquence.de-DE.Flo", "com.apple.eloquence.en-GB.Flo", "com.apple.eloquence.en-US.Flo", "com.apple.eloquence.es-ES.Flo", "com.apple.eloquence.es-MX.Flo", "com.apple.eloquence.fi-FI.Flo", "com.apple.eloquence.fr-CA.Flo", "com.apple.eloquence.fr-FR.Flo", "com.apple.eloquence.it-IT.Flo", "com.apple.eloquence.ja-JP.Flo", "com.apple.eloquence.ko-KR.Flo", "com.apple.eloquence.pt-BR.Flo", "com.apple.eloquence.zh-CN.Flo", "com.apple.eloquence.zh-TW.Flo", "com.apple.speech.synthesis.voice.Fred", "com.apple.speech.synthesis.voice.GoodNews", "com.apple.eloquence.de-DE.Grandma", "com.apple.eloquence.en-GB.Grandma", "com.apple.eloquence.en-US.Grandma", "com.apple.eloquence.es-ES.Grandma", "com.apple.eloquence.es-MX.Grandma", "com.apple.eloquence.fi-FI.Grandma", "com.apple.eloquence.fr-CA.Grandma", "com.apple.eloquence.fr-FR.Grandma", "com.apple.eloquence.it-IT.Grandma", "com.apple.eloquence.ja-JP.Grandma", "com.apple.eloquence.ko-KR.Grandma", "com.apple.eloquence.pt-BR.Grandma", "com.apple.eloquence.zh-CN.Grandma", "com.apple.eloquence.zh-TW.Grandma", "com.apple.eloquence.de-DE.Grandpa", "com.apple.eloquence.en-GB.Grandpa", "com.apple.eloquence.en-US.Grandpa", "com.apple.eloquence.es-ES.Grandpa", "com.apple.eloquence.es-MX.Grandpa", "com.apple.eloquence.fi-FI.Grandpa", "com.apple.eloquence.fr-CA.Grandpa", "com.apple.eloquence.fr-FR.Grandpa", "com.apple.eloquence.it-IT.Grandpa", "com.apple.eloquence.ja-JP.Grandpa", "com.apple.eloquence.ko-KR.Grandpa", "com.apple.eloquence.pt-BR.Grandpa", "com.apple.eloquence.zh-CN.Grandpa", "com.apple.eloquence.zh-TW.Grandpa", "com.apple.speech.synthesis.voice.Hysterical", "com.apple.voice.compact.ro-RO.Ioana", "com.apple.eloquence.fr-FR.Jacques", "com.apple.voice.compact.pt-PT.Joana", "com.apple.voice.enhanced.en-US.Joelle", 
"com.apple.speech.synthesis.voice.Junior", "com.apple.voice.compact.th-TH.Kanya", "com.apple.voice.compact.en-AU.Karen", "com.apple.voice.enhanced.en-GB.Kate", "com.apple.speech.synthesis.voice.Kathy", "com.apple.voice.compact.ja-JP.Kyoko", "com.apple.voice.enhanced.ja-JP.Kyoko", "com.apple.voice.compact.hr-HR.Lana", "com.apple.voice.compact.sk-SK.Laura", "com.apple.voice.compact.hi-IN.Lekha", "com.apple.voice.compact.uk-UA.Lesya", "com.apple.voice.compact.vi-VN.Linh", "com.apple.voice.compact.pt-BR.Luciana", "com.apple.voice.compact.ar-001.Maged", "com.apple.voice.premium.en-GB.Malcolm", "com.apple.voice.compact.hu-HU.Mariska", "com.apple.voice.compact.zh-TW.Meijia", "com.apple.voice.compact.el-GR.Melina", "com.apple.voice.compact.ru-RU.Milena", "com.apple.voice.compact.en-IE.Moira", "com.apple.voice.enhanced.en-IE.Moira", "com.apple.voice.compact.es-ES.Monica", "com.apple.voice.compact.ca-ES.Montserrat", "com.apple.voice.enhanced.en-US.Nathan", "com.apple.voice.enhanced.en-US.Noelle", "com.apple.voice.compact.nb-NO.Nora", "com.apple.speech.synthesis.voice.Organ", "com.apple.voice.compact.ja-JP.Otoya", "com.apple.voice.enhanced.ja-JP.Otoya", "com.apple.voice.compact.es-MX.Paulina", "com.apple.speech.synthesis.voice.Princess", "com.apple.speech.synthesis.voice.Ralph", "com.apple.eloquence.de-DE.Reed", "com.apple.eloquence.en-GB.Reed", "com.apple.eloquence.en-US.Reed", "com.apple.eloquence.es-ES.Reed", "com.apple.eloquence.es-MX.Reed", "com.apple.eloquence.fi-FI.Reed", "com.apple.eloquence.fr-CA.Reed", "com.apple.eloquence.it-IT.Reed", "com.apple.eloquence.ja-JP.Reed", "com.apple.eloquence.ko-KR.Reed", "com.apple.eloquence.pt-BR.Reed", "com.apple.eloquence.zh-CN.Reed", "com.apple.eloquence.zh-TW.Reed", "com.apple.voice.compact.en-IN.Rishi", "com.apple.eloquence.de-DE.Rocko", "com.apple.eloquence.en-GB.Rocko", "com.apple.eloquence.en-US.Rocko", "com.apple.eloquence.es-ES.Rocko", "com.apple.eloquence.es-MX.Rocko", "com.apple.eloquence.fi-FI.Rocko", "com.apple.eloquence.fr-CA.Rocko", "com.apple.eloquence.fr-FR.Rocko", "com.apple.eloquence.it-IT.Rocko", "com.apple.eloquence.ja-JP.Rocko", "com.apple.eloquence.ko-KR.Rocko", "com.apple.eloquence.pt-BR.Rocko", "com.apple.eloquence.zh-CN.Rocko", "com.apple.eloquence.zh-TW.Rocko", "com.apple.voice.compact.en-US.Samantha", "com.apple.voice.enhanced.en-US.Samantha", "com.apple.eloquence.de-DE.Sandy", "com.apple.eloquence.en-GB.Sandy", "com.apple.eloquence.en-US.Sandy", "com.apple.eloquence.es-ES.Sandy", "com.apple.eloquence.es-MX.Sandy", "com.apple.eloquence.fi-FI.Sandy", "com.apple.eloquence.fr-CA.Sandy", "com.apple.eloquence.fr-FR.Sandy", "com.apple.eloquence.it-IT.Sandy", "com.apple.eloquence.ja-JP.Sandy", "com.apple.eloquence.ko-KR.Sandy", "com.apple.eloquence.pt-BR.Sandy", "com.apple.eloquence.zh-CN.Sandy", "com.apple.eloquence.zh-TW.Sandy", "com.apple.voice.compact.da-DK.Sara", "com.apple.voice.compact.fi-FI.Satu", "com.apple.eloquence.de-DE.Shelley", "com.apple.eloquence.en-GB.Shelley", "com.apple.eloquence.en-US.Shelley", "com.apple.eloquence.es-ES.Shelley", "com.apple.eloquence.es-MX.Shelley", "com.apple.eloquence.fi-FI.Shelley", "com.apple.eloquence.fr-CA.Shelley", "com.apple.eloquence.fr-FR.Shelley", "com.apple.eloquence.it-IT.Shelley", "com.apple.eloquence.ja-JP.Shelley", "com.apple.eloquence.ko-KR.Shelley", "com.apple.eloquence.pt-BR.Shelley", "com.apple.eloquence.zh-CN.Shelley", "com.apple.eloquence.zh-TW.Shelley", "com.apple.voice.compact.zh-HK.Sinji", "com.apple.voice.compact.en-US.Susan", "com.apple.voice.enhanced.en-US.Susan", 
"com.apple.voice.compact.en-ZA.Tessa", "com.apple.voice.compact.fr-FR.Thomas", "com.apple.voice.compact.sl-SI.Tina", "com.apple.voice.compact.zh-CN.Tingting", "com.apple.voice.compact.en-US.Tom", "com.apple.voice.enhanced.en-US.Tom", "com.apple.speech.synthesis.voice.Trinoids", "com.apple.speech.synthesis.voice.Whisper", "com.apple.voice.compact.nl-NL.Xander", "com.apple.voice.compact.tr-TR.Yelda", "com.apple.voice.compact.ko-KR.Yuna", "com.apple.speech.synthesis.voice.Zarvox", "com.apple.voice.enhanced.en-US.Zoe", "com.apple.voice.premium.en-US.Zoe", "com.apple.voice.compact.pl-PL.Zosia", "com.apple.voice.compact.cs-CZ.Zuzana"}

★Click Here to Open This Script 

AppleScript name: AVSpeechSynthesisVoiceのIDを取得_macOS15.0.scpt
— Created 2018-02-15 by Takaaki Naganoya
— 2018 Piyomaru Software
use AppleScript version "2.8"
use scripting additions
use framework "Foundation"
use framework "AVFoundation"

set aList to (current application’s AVSpeechSynthesisVoice’s speechVoices()’s valueForKey:"identifier") as list
–macOS 13 (190)
–> {"com.apple.voice.compact.ar-001.Maged", "com.apple.voice.compact.bg-BG.Daria", "com.apple.voice.compact.ca-ES.Montserrat", "com.apple.voice.compact.cs-CZ.Zuzana", "com.apple.voice.compact.da-DK.Sara", "com.apple.eloquence.de-DE.Sandy", "com.apple.eloquence.de-DE.Shelley", "com.apple.ttsbundle.siri_Helena_de-DE_compact", "com.apple.eloquence.de-DE.Grandma", "com.apple.eloquence.de-DE.Grandpa", "com.apple.eloquence.de-DE.Eddy", "com.apple.eloquence.de-DE.Reed", "com.apple.voice.compact.de-DE.Anna", "com.apple.ttsbundle.siri_Martin_de-DE_compact", "com.apple.eloquence.de-DE.Rocko", "com.apple.eloquence.de-DE.Flo", "com.apple.voice.compact.el-GR.Melina", "com.apple.ttsbundle.siri_Gordon_en-AU_compact", "com.apple.voice.compact.en-AU.Karen", "com.apple.ttsbundle.siri_Catherine_en-AU_compact", "com.apple.voice.premium.en-GB.Malcolm", "com.apple.voice.enhanced.en-GB.Daniel", "com.apple.ttsbundle.Oliver-premium", "com.apple.voice.enhanced.en-GB.Kate", "com.apple.eloquence.en-GB.Rocko", "com.apple.eloquence.en-GB.Shelley", "com.apple.ttsbundle.Oliver-compact", "com.apple.voice.compact.en-GB.Daniel", "com.apple.ttsbundle.siri_Martha_en-GB_compact", "com.apple.eloquence.en-GB.Grandma", "com.apple.eloquence.en-GB.Grandpa", "com.apple.eloquence.en-GB.Flo", "com.apple.eloquence.en-GB.Eddy", "com.apple.eloquence.en-GB.Reed", "com.apple.eloquence.en-GB.Sandy", "com.apple.ttsbundle.siri_Arthur_en-GB_compact", "com.apple.voice.enhanced.en-IE.Moira", "com.apple.voice.compact.en-IE.Moira", "com.apple.voice.compact.en-IN.Rishi", "com.apple.voice.premium.en-US.Zoe", "com.apple.voice.premium.en-US.Ava", "com.apple.voice.enhanced.en-US.Samantha", "com.apple.voice.enhanced.en-US.Evan", "com.apple.voice.enhanced.en-US.Zoe", "com.apple.voice.enhanced.en-US.Joelle", "com.apple.voice.enhanced.en-US.Susan", "com.apple.voice.enhanced.en-US.Nathan", "com.apple.voice.enhanced.en-US.Tom", "com.apple.voice.enhanced.en-US.Noelle", "com.apple.eloquence.en-US.Flo", "com.apple.speech.synthesis.voice.Albert", "com.apple.speech.synthesis.voice.Bahh", "com.apple.speech.synthesis.voice.Fred", "com.apple.speech.synthesis.voice.Hysterical", "com.apple.voice.compact.en-US.Allison", "com.apple.speech.synthesis.voice.Organ", "com.apple.speech.synthesis.voice.Cellos", "com.apple.voice.compact.en-US.Evan", "com.apple.speech.synthesis.voice.Zarvox", "com.apple.eloquence.en-US.Rocko", "com.apple.eloquence.en-US.Shelley", "com.apple.speech.synthesis.voice.Princess", "com.apple.eloquence.en-US.Grandma", "com.apple.eloquence.en-US.Eddy", "com.apple.speech.synthesis.voice.Bells", "com.apple.eloquence.en-US.Grandpa", "com.apple.speech.synthesis.voice.Trinoids", "com.apple.speech.synthesis.voice.Kathy", "com.apple.eloquence.en-US.Reed", "com.apple.speech.synthesis.voice.Boing", "com.apple.speech.synthesis.voice.GoodNews", "com.apple.speech.synthesis.voice.Whisper", "com.apple.speech.synthesis.voice.Bruce", "com.apple.speech.synthesis.voice.Deranged", "com.apple.ttsbundle.siri_Nicky_en-US_compact", "com.apple.speech.synthesis.voice.BadNews", "com.apple.ttsbundle.siri_Aaron_en-US_compact", "com.apple.speech.synthesis.voice.Bubbles", "com.apple.voice.compact.en-US.Susan", "com.apple.voice.compact.en-US.Tom", "com.apple.speech.synthesis.voice.Agnes", "com.apple.voice.compact.en-US.Samantha", "com.apple.eloquence.en-US.Sandy", "com.apple.speech.synthesis.voice.Junior", "com.apple.voice.compact.en-US.Ava", "com.apple.speech.synthesis.voice.Ralph", "com.apple.voice.compact.en-ZA.Tessa", "com.apple.eloquence.es-ES.Shelley", 
"com.apple.eloquence.es-ES.Grandma", "com.apple.eloquence.es-ES.Rocko", "com.apple.eloquence.es-ES.Grandpa", "com.apple.eloquence.es-ES.Flo", "com.apple.eloquence.es-ES.Sandy", "com.apple.voice.compact.es-ES.Monica", "com.apple.eloquence.es-ES.Eddy", "com.apple.eloquence.es-ES.Reed", "com.apple.eloquence.es-MX.Rocko", "com.apple.voice.compact.es-MX.Paulina", "com.apple.eloquence.es-MX.Flo", "com.apple.eloquence.es-MX.Sandy", "com.apple.eloquence.es-MX.Eddy", "com.apple.eloquence.es-MX.Shelley", "com.apple.eloquence.es-MX.Reed", "com.apple.eloquence.es-MX.Grandma", "com.apple.eloquence.es-MX.Grandpa", "com.apple.eloquence.fi-FI.Shelley", "com.apple.eloquence.fi-FI.Grandma", "com.apple.eloquence.fi-FI.Grandpa", "com.apple.eloquence.fi-FI.Sandy", "com.apple.voice.compact.fi-FI.Satu", "com.apple.eloquence.fi-FI.Eddy", "com.apple.eloquence.fi-FI.Rocko", "com.apple.eloquence.fi-FI.Reed", "com.apple.eloquence.fi-FI.Flo", "com.apple.voice.premium.fr-CA.Amelie", "com.apple.voice.enhanced.fr-CA.Chantal", "com.apple.eloquence.fr-CA.Shelley", "com.apple.eloquence.fr-CA.Grandma", "com.apple.eloquence.fr-CA.Grandpa", "com.apple.eloquence.fr-CA.Rocko", "com.apple.eloquence.fr-CA.Eddy", "com.apple.eloquence.fr-CA.Reed", "com.apple.voice.compact.fr-CA.Amelie", "com.apple.eloquence.fr-CA.Flo", "com.apple.eloquence.fr-CA.Sandy", "com.apple.eloquence.fr-FR.Grandma", "com.apple.eloquence.fr-FR.Flo", "com.apple.eloquence.fr-FR.Rocko", "com.apple.eloquence.fr-FR.Grandpa", "com.apple.eloquence.fr-FR.Sandy", "com.apple.eloquence.fr-FR.Eddy", "com.apple.voice.compact.fr-FR.Thomas", "com.apple.ttsbundle.siri_Dan_fr-FR_compact", "com.apple.eloquence.fr-FR.Jacques", "com.apple.ttsbundle.siri_Marie_fr-FR_compact", "com.apple.eloquence.fr-FR.Shelley", "com.apple.voice.compact.he-IL.Carmit", "com.apple.voice.compact.hi-IN.Lekha", "com.apple.voice.compact.hr-HR.Lana", "com.apple.voice.compact.hu-HU.Mariska", "com.apple.voice.compact.id-ID.Damayanti", "com.apple.eloquence.it-IT.Eddy", "com.apple.eloquence.it-IT.Sandy", "com.apple.eloquence.it-IT.Reed", "com.apple.eloquence.it-IT.Shelley", "com.apple.eloquence.it-IT.Grandma", "com.apple.eloquence.it-IT.Grandpa", "com.apple.eloquence.it-IT.Flo", "com.apple.eloquence.it-IT.Rocko", "com.apple.voice.compact.it-IT.Alice", "com.apple.ttsbundle.siri_Hattori_ja-JP_premium", "com.apple.ttsbundle.siri_O-Ren_ja-JP_premium", "com.apple.voice.enhanced.ja-JP.Kyoko", "com.apple.voice.enhanced.ja-JP.Otoya", "com.apple.voice.compact.ja-JP.Kyoko", "com.apple.ttsbundle.siri_Hattori_ja-JP_compact", "com.apple.voice.compact.ja-JP.Otoya", "com.apple.ttsbundle.siri_O-Ren_ja-JP_compact", "com.apple.voice.compact.ko-KR.Yuna", "com.apple.voice.compact.ms-MY.Amira", "com.apple.voice.compact.nb-NO.Nora", "com.apple.voice.compact.nl-BE.Ellen", "com.apple.voice.compact.nl-NL.Xander", "com.apple.voice.compact.pl-PL.Zosia", "com.apple.eloquence.pt-BR.Reed", "com.apple.voice.compact.pt-BR.Luciana", "com.apple.eloquence.pt-BR.Shelley", "com.apple.eloquence.pt-BR.Grandma", "com.apple.eloquence.pt-BR.Grandpa", "com.apple.eloquence.pt-BR.Rocko", "com.apple.eloquence.pt-BR.Flo", "com.apple.eloquence.pt-BR.Sandy", "com.apple.eloquence.pt-BR.Eddy", "com.apple.voice.compact.pt-PT.Joana", "com.apple.voice.compact.ro-RO.Ioana", "com.apple.voice.compact.ru-RU.Milena", "com.apple.voice.compact.sk-SK.Laura", "com.apple.voice.compact.sv-SE.Alva", "com.apple.voice.compact.th-TH.Kanya", "com.apple.voice.compact.tr-TR.Yelda", "com.apple.voice.compact.uk-UA.Lesya", "com.apple.voice.compact.vi-VN.Linh", 
"com.apple.ttsbundle.siri_Yu-Shu_zh-CN_compact", "com.apple.ttsbundle.siri_Li-Mu_zh-CN_compact", "com.apple.voice.compact.zh-CN.Tingting", "com.apple.ttsbundle.Sin-ji-premium", "com.apple.voice.compact.zh-HK.Sinji", "com.apple.ttsbundle.Mei-Jia-premium", "com.apple.voice.compact.zh-TW.Meijia", "com.apple.speech.synthesis.voice.Alex"}

–macOS 15 (222)
–> {"com.apple.voice.compact.ar-001.Maged", "com.apple.voice.compact.bg-BG.Daria", "com.apple.voice.compact.ca-ES.Montserrat", "com.apple.voice.compact.cs-CZ.Zuzana", "com.apple.voice.compact.da-DK.Sara", "com.apple.eloquence.de-DE.Sandy", "com.apple.eloquence.de-DE.Shelley", "com.apple.ttsbundle.siri_Helena_de-DE_compact", "com.apple.eloquence.de-DE.Grandma", "com.apple.eloquence.de-DE.Grandpa", "com.apple.eloquence.de-DE.Eddy", "com.apple.eloquence.de-DE.Reed", "com.apple.voice.compact.de-DE.Anna", "com.apple.ttsbundle.siri_Martin_de-DE_compact", "com.apple.eloquence.de-DE.Rocko", "com.apple.eloquence.de-DE.Flo", "com.apple.voice.compact.el-GR.Melina", "com.apple.ttsbundle.siri_Gordon_en-AU_compact", "com.apple.voice.compact.en-AU.Karen", "com.apple.ttsbundle.siri_Catherine_en-AU_compact", "com.apple.voice.premium.en-GB.Malcolm", "com.apple.voice.enhanced.en-GB.Daniel", "com.apple.voice.enhanced.en-GB.Kate", "com.apple.eloquence.en-GB.Rocko", "com.apple.eloquence.en-GB.Shelley", "com.apple.voice.compact.en-GB.Daniel", "com.apple.ttsbundle.siri_Martha_en-GB_compact", "com.apple.eloquence.en-GB.Grandma", "com.apple.eloquence.en-GB.Grandpa", "com.apple.eloquence.en-GB.Flo", "com.apple.eloquence.en-GB.Eddy", "com.apple.eloquence.en-GB.Reed", "com.apple.eloquence.en-GB.Sandy", "com.apple.ttsbundle.siri_Arthur_en-GB_compact", "com.apple.voice.enhanced.en-IE.Moira", "com.apple.voice.compact.en-IE.Moira", "com.apple.voice.compact.en-IN.Rishi", "com.apple.voice.premium.en-US.Zoe", "com.apple.voice.premium.en-US.Ava", "com.apple.voice.enhanced.en-US.Samantha", "com.apple.voice.enhanced.en-US.Evan", "com.apple.ttsbundle.siri_Nicky_en-US_premium", "com.apple.voice.enhanced.en-US.Ava", "com.apple.voice.enhanced.en-US.Zoe", "com.apple.voice.enhanced.en-US.Joelle", "com.apple.voice.enhanced.en-US.Susan", "com.apple.voice.enhanced.en-US.Allison", "com.apple.speech.synthesis.voice.Bruce", "com.apple.voice.enhanced.en-US.Nathan", "com.apple.voice.enhanced.en-US.Tom", "com.apple.speech.synthesis.voice.Agnes", "com.apple.voice.enhanced.en-US.Noelle", "com.apple.eloquence.en-US.Flo", "com.apple.speech.synthesis.voice.Bahh", "com.apple.speech.synthesis.voice.Fred", "com.apple.speech.synthesis.voice.Albert", "com.apple.speech.synthesis.voice.Hysterical", "com.apple.voice.compact.en-US.Allison", "com.apple.speech.synthesis.voice.Organ", "com.apple.speech.synthesis.voice.Cellos", "com.apple.voice.compact.en-US.Evan", "com.apple.speech.synthesis.voice.Zarvox", "com.apple.eloquence.en-US.Rocko", "com.apple.eloquence.en-US.Shelley", "com.apple.speech.synthesis.voice.Princess", "com.apple.eloquence.en-US.Grandma", "com.apple.eloquence.en-US.Eddy", "com.apple.speech.synthesis.voice.Bells", "com.apple.eloquence.en-US.Grandpa", "com.apple.speech.synthesis.voice.Kathy", "com.apple.speech.synthesis.voice.Trinoids", "com.apple.eloquence.en-US.Reed", "com.apple.speech.synthesis.voice.Boing", "com.apple.speech.synthesis.voice.Whisper", "com.apple.speech.synthesis.voice.GoodNews", "com.apple.speech.synthesis.voice.Deranged", "com.apple.ttsbundle.siri_Nicky_en-US_compact", "com.apple.speech.synthesis.voice.BadNews", "com.apple.ttsbundle.siri_Aaron_en-US_compact", "com.apple.speech.synthesis.voice.Bubbles", "com.apple.voice.compact.en-US.Susan", "com.apple.voice.compact.en-US.Tom", "com.apple.voice.compact.en-US.Samantha", "com.apple.eloquence.en-US.Sandy", "com.apple.speech.synthesis.voice.Junior", "com.apple.voice.compact.en-US.Ava", "com.apple.speech.synthesis.voice.Ralph", "com.apple.voice.compact.en-ZA.Tessa", 
"com.apple.eloquence.es-ES.Shelley", "com.apple.eloquence.es-ES.Grandma", "com.apple.eloquence.es-ES.Rocko", "com.apple.eloquence.es-ES.Grandpa", "com.apple.eloquence.es-ES.Flo", "com.apple.eloquence.es-ES.Sandy", "com.apple.voice.compact.es-ES.Monica", "com.apple.eloquence.es-ES.Eddy", "com.apple.eloquence.es-ES.Reed", "com.apple.eloquence.es-MX.Rocko", "com.apple.voice.compact.es-MX.Paulina", "com.apple.eloquence.es-MX.Flo", "com.apple.eloquence.es-MX.Sandy", "com.apple.eloquence.es-MX.Eddy", "com.apple.eloquence.es-MX.Shelley", "com.apple.eloquence.es-MX.Grandma", "com.apple.eloquence.es-MX.Reed", "com.apple.eloquence.es-MX.Grandpa", "com.apple.eloquence.fi-FI.Shelley", "com.apple.eloquence.fi-FI.Grandma", "com.apple.eloquence.fi-FI.Grandpa", "com.apple.eloquence.fi-FI.Sandy", "com.apple.voice.compact.fi-FI.Satu", "com.apple.eloquence.fi-FI.Eddy", "com.apple.eloquence.fi-FI.Rocko", "com.apple.eloquence.fi-FI.Reed", "com.apple.eloquence.fi-FI.Flo", "com.apple.voice.premium.fr-CA.Amelie", "com.apple.voice.enhanced.fr-CA.Chantal", "com.apple.eloquence.fr-CA.Shelley", "com.apple.eloquence.fr-CA.Grandma", "com.apple.eloquence.fr-CA.Grandpa", "com.apple.eloquence.fr-CA.Rocko", "com.apple.eloquence.fr-CA.Eddy", "com.apple.eloquence.fr-CA.Reed", "com.apple.voice.compact.fr-CA.Amelie", "com.apple.eloquence.fr-CA.Flo", "com.apple.eloquence.fr-CA.Sandy", "com.apple.eloquence.fr-FR.Grandma", "com.apple.eloquence.fr-FR.Flo", "com.apple.eloquence.fr-FR.Rocko", "com.apple.eloquence.fr-FR.Grandpa", "com.apple.eloquence.fr-FR.Sandy", "com.apple.eloquence.fr-FR.Eddy", "com.apple.voice.compact.fr-FR.Thomas", "com.apple.ttsbundle.siri_Dan_fr-FR_compact", "com.apple.eloquence.fr-FR.Jacques", "com.apple.ttsbundle.siri_Marie_fr-FR_compact", "com.apple.eloquence.fr-FR.Shelley", "com.apple.voice.compact.he-IL.Carmit", "com.apple.voice.compact.hi-IN.Lekha", "com.apple.voice.compact.hr-HR.Lana", "com.apple.voice.compact.hu-HU.Mariska", "com.apple.voice.compact.id-ID.Damayanti", "com.apple.eloquence.it-IT.Eddy", "com.apple.eloquence.it-IT.Sandy", "com.apple.eloquence.it-IT.Reed", "com.apple.eloquence.it-IT.Shelley", "com.apple.eloquence.it-IT.Grandma", "com.apple.eloquence.it-IT.Grandpa", "com.apple.eloquence.it-IT.Flo", "com.apple.eloquence.it-IT.Rocko", "com.apple.voice.compact.it-IT.Alice", "com.apple.ttsbundle.siri_Hattori_ja-JP_premium", "com.apple.ttsbundle.siri_O-Ren_ja-JP_premium", "com.apple.voice.enhanced.ja-JP.Kyoko", "com.apple.voice.enhanced.ja-JP.Otoya", "com.apple.eloquence.ja-JP.Eddy", "com.apple.eloquence.ja-JP.Reed", "com.apple.eloquence.ja-JP.Shelley", "com.apple.voice.compact.ja-JP.Kyoko", "com.apple.eloquence.ja-JP.Grandma", "com.apple.eloquence.ja-JP.Rocko", "com.apple.eloquence.ja-JP.Grandpa", "com.apple.ttsbundle.siri_Hattori_ja-JP_compact", "com.apple.voice.compact.ja-JP.Otoya", "com.apple.eloquence.ja-JP.Sandy", "com.apple.ttsbundle.siri_O-Ren_ja-JP_compact", "com.apple.eloquence.ja-JP.Flo", "com.apple.eloquence.ko-KR.Rocko", "com.apple.eloquence.ko-KR.Grandma", "com.apple.eloquence.ko-KR.Grandpa", "com.apple.eloquence.ko-KR.Eddy", "com.apple.eloquence.ko-KR.Sandy", "com.apple.voice.compact.ko-KR.Yuna", "com.apple.eloquence.ko-KR.Reed", "com.apple.eloquence.ko-KR.Flo", "com.apple.eloquence.ko-KR.Shelley", "com.apple.voice.compact.ms-MY.Amira", "com.apple.voice.compact.nb-NO.Nora", "com.apple.voice.compact.nl-BE.Ellen", "com.apple.voice.compact.nl-NL.Xander", "com.apple.voice.compact.pl-PL.Zosia", "com.apple.eloquence.pt-BR.Reed", "com.apple.voice.compact.pt-BR.Luciana", 
"com.apple.eloquence.pt-BR.Shelley", "com.apple.eloquence.pt-BR.Grandma", "com.apple.eloquence.pt-BR.Grandpa", "com.apple.eloquence.pt-BR.Rocko", "com.apple.eloquence.pt-BR.Flo", "com.apple.eloquence.pt-BR.Sandy", "com.apple.eloquence.pt-BR.Eddy", "com.apple.voice.compact.pt-PT.Joana", "com.apple.voice.compact.ro-RO.Ioana", "com.apple.voice.compact.ru-RU.Milena", "com.apple.voice.compact.sk-SK.Laura", "com.apple.voice.compact.sl-SI.Tina", "com.apple.voice.compact.sv-SE.Alva", "com.apple.voice.compact.th-TH.Kanya", "com.apple.voice.compact.tr-TR.Yelda", "com.apple.voice.compact.uk-UA.Lesya", "com.apple.voice.compact.vi-VN.Linh", "com.apple.eloquence.zh-CN.Eddy", "com.apple.eloquence.zh-CN.Shelley", "com.apple.ttsbundle.siri_Yu-Shu_zh-CN_compact", "com.apple.eloquence.zh-CN.Grandma", "com.apple.eloquence.zh-CN.Reed", "com.apple.eloquence.zh-CN.Grandpa", "com.apple.eloquence.zh-CN.Rocko", "com.apple.ttsbundle.siri_Li-Mu_zh-CN_compact", "com.apple.eloquence.zh-CN.Flo", "com.apple.voice.compact.zh-CN.Tingting", "com.apple.eloquence.zh-CN.Sandy", "com.apple.voice.compact.zh-HK.Sinji", "com.apple.eloquence.zh-TW.Shelley", "com.apple.eloquence.zh-TW.Grandma", "com.apple.eloquence.zh-TW.Grandpa", "com.apple.eloquence.zh-TW.Sandy", "com.apple.eloquence.zh-TW.Flo", "com.apple.eloquence.zh-TW.Eddy", "com.apple.eloquence.zh-TW.Reed", "com.apple.voice.compact.zh-TW.Meijia", "com.apple.eloquence.zh-TW.Rocko", "com.apple.speech.synthesis.voice.Alex"}

★Click Here to Open This Script 

Posted in Text to Speech | Tagged 15.0savvy | Leave a comment

On the changes to the TTS environment in macOS 13

Posted on November 22, 2023 by Takaaki Naganoya

To be precise, things have been changing substantially since around macOS 12; I had been too busy to investigate in detail, but necessity forced me to look into various things (in connection with Piyomaru Context Menu Assistant).

Change 1: the Age attribute was removed from TTS voices

Change 2: the Name attribute of TTS voices is now returned localized

Change 3: passing many TTS voice names to AppleScript's say command now raises an error

Change 4: when a TTS character exists in both a low-quality and a high-quality (Premium, Enhanced) version, AppleScript's say command now selects the low-quality one. There is no way to explicitly specify the high-quality (Premium, Enhanced) character (impossible with the say command; speaking via AVSpeechSynthesizer is possible)

Change 5: AppleScript's say command cannot specify a Siri voice, but if a Siri voice is set as the system default spoken-content voice, running say with no TTS voice specified (so that the system default voice is used) speaks with the Siri voice

Is Change 1 part of some political-correctness push? These are machine voices, so I doubt anyone would complain "don't process based on age". Whoever decided this is out of their mind.

For Change 2, shouldn't they have added an attribute such as LocalizedName alongside Name? It looks like an inexperienced engineer banged something out as a quick fix. Whoever approved this is out of their mind; calling them a fool would be fair.

Change 3 is a real problem. AppleScript's built-in commands offer no way to get a list of TTS voice character names, leaving the vaguely irresponsible situation of "specify the names that appear in System Preferences". Of the long-standing TTS voices, specifying Bells, Hysterical, Organ, Princess, Trinoids, or Whisper now raises an error (even though they exist inside the OS). I take this to mean something inside the OS is broken. Enough already.

Change 4 makes the inside of Apple look terribly confused; all I sense is a chaotic development floor where nobody has sorted out the situation. They have recklessly multiplied the number of OSes without increasing engineering headcount to match, and the managers running things that way are out of their minds.

Change 5 is probably behavior Apple did not intend, but it means that "right now you can specify Siri voices from AppleScript's say command to your heart's content". When the say command is given no voice, the OS default voice character is used; set that default voice to a Siri voice, run say without specifying a character, and the speech comes out in the Siri voice.

Being usable from the say command means it can be rendered to an audio file. If Apple's license with the character provider covers that, fine, but a bug may be enabling usage that was never intended. Either way, the responsibility for creating this situation lies with Apple, not with users.

–> Play TTS by Siri demo

As a follow-up, I found that the shell command /usr/bin/say has no problem specifying "Bells" and the other Text To Speech characters that raise an error in AppleScript's say command.
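
A minimal sketch of that workaround from AppleScript, going through do shell script (the sample sentence is arbitrary):

--/usr/bin/say accepts voices such as Bells that raise an error in AppleScript's say command
do shell script "/usr/bin/say -v Bells " & quoted form of "Time flies like an arrow"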

Posted in Bug Text to Speech | Tagged 13.0savvy | Leave a comment

Changes in the macOS 13 TTS Voice environment

Posted on November 12, 2022 by Takaaki Naganoya

macOS 13 adds TTS (Text To Speech) characters: in the Japanese environment, the Siri voices "O-Ren" (female) and "Hattori" (male) can now be used by the various spoken-content features.

In macOS 13's System Settings (System Settings.app), under Accessibility > Spoken Content, you can select and add the "System Voice" (the TTS speaking character), but...

However, that does not mean "Hattori" or "O-Ren" can be used with AppleScript's say command (speech or rendering to an audio file). Conversely, when the voice names are obtained through the OS services, results such as

{"Kyoko", "Kyoko(拡張)", "Otoya", "Otoya(拡張)"}

come back, yet specifying "Kyoko(拡張)" or "Otoya(拡張)" with the say command raises an error.

say "こんにちは" using "Kyoko(拡張)"
--> AppleScript Execution Error
AppleScript name: TTS Voiceを言語で抽出.scpt
— Created 2017-03-28 by Takaaki Naganoya
— 2017 Piyomaru Software
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

set aLoc to (current application’s NSLocale’s currentLocale()’s identifier()) as string
–>  "ja_JP"

set vList to getTTSVoiceNameWithLanguage(aLoc) of me
–>  {"Kyoko", "Otoya"}–macOS 12まで
–>  {"Kyoko", "Kyoko(拡張)", "Otoya", "Otoya(拡張)"}–macOS 13

on getTTSVoiceNameWithLanguage(voiceLang)
  set outArray to current application’s NSMutableArray’s new()
  
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (outArray’s addObject:aDIc)
  end repeat
  
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceLocaleIdentifier == %@", voiceLang)
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (filteredArray’s valueForKey:"VoiceName") as list
  
  
return aResList
end getTTSVoiceNameWithLanguage

★Click Here to Open This Script 

Perhaps the handler needs to return something like the family name of these TTS voices, adapted to the current TTS voice environment???

AppleScript name: TTS Voiceを言語で抽出 v2.scpt
— Created 2017-03-28 by Takaaki Naganoya
— Modified 2022-11-12 by Takaaki Naganoya
— 2022 Piyomaru Software
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

set aLoc to (current application’s NSLocale’s currentLocale()’s identifier()) as string
–>  "ja_JP"

set vList to getTTSVoiceNameWithLanguage(aLoc) of me
–>  {"Kyoko", "Otoya"}–macOS 12まで
–>  {"Kyoko", "Otoya"}–macOS 13

on getTTSVoiceNameWithLanguage(voiceLang)
  set outArray to current application’s NSMutableArray’s new()
  
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (outArray’s addObject:aDIc)
  end repeat
  
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceLocaleIdentifier == %@", voiceLang)
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (filteredArray’s valueForKey:"VoiceNameRoot") –Get the VoiceNameRoot value
  
  
–Make the elements unique
  
set theSet to current application’s NSOrderedSet’s orderedSetWithArray:aResList
  
return (theSet’s array()) as list
end getTTSVoiceNameWithLanguage

★Click Here to Open This Script 

Posted in System Text to Speech | Tagged 13.0savvy | 3 Comments

Guess the language of a given natural-language text, then auto-select a TTS character of the specified gender and read it aloud

Posted on February 5, 2022 by Takaaki Naganoya

An AppleScript that, given a natural-language text, guesses the language it is written in and then, using that language code (such as ja), a gender, and whether a Premium (high-quality) voice is wanted (true/false), narrows down the Text to Speech voice characters and reads the text aloud with the say command.

# The URL link attached to the script listing has been rewritten.

There is a specification mismatch between the language code guessed from natural-language text and the language codes assigned to the TTS voice characters, so automatic detection of Chinese requires adding (a little) extra processing.

What can be obtained from natural-language text is a Simplified/Traditional script code, while the TTS speaking characters carry country codes: China, Hong Kong, Taiwan. So either attach a mapping table, lump everything under "zh" and pick one at random, decide from the machine's latitude and longitude, or keep an editable table and apply its rules as configured...
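
One possible shape for such a mapping table, as a sketch only (the handler name and the zh-Hans/zh-Hant pairing below are assumptions of mine, not something the script in this post implements):

--Hypothetical lookup from a guessed language code to TTS VoiceLocaleIdentifier candidates
on ttsLocalesForLanguageCode(langCode)
  if langCode = "zh-Hans" then return {"zh_CN"}
  if langCode = "zh-Hant" then return {"zh_TW", "zh_HK"}
  return {langCode} --ja, en, fr, and so on can be matched as-is with "begins with"
end ttsLocalesForLanguageCode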


▲In System Preferences > Accessibility > Spoken Content > System Voice, the pop-up menu has a "Customize" item at the bottom where Text To Speech speaking characters can be added


▲The audio data for an added TTS character is downloaded automatically. A Siri voice cannot be chosen for TTS here, yet it can be chosen for speech in Shortcuts. Is this down to the contract with the external TTS voice vendor, or simply a different management program?

AppleScript name: 与えられた自然言語テキストから言語を推測して、指定の性別で、TTSキャラクタを自動選択して読み上げ v1(簡体字、繁体字 未サポート).scptd
—
–  Created by: Takaaki Naganoya
–  Created on: 2022/02/05
—
–  Copyright © 2022 Piyomaru Software, All Rights Reserved
—
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

property NSSpeechSynthesizer : a reference to current application’s NSSpeechSynthesizer

set str1 to "こんにちは"

–Guess the language of the given string and get its (short) language code
set a1Res to guessLanguageCodeOf(str1) of me

–Get TTS attribute records keyed by the (short) language code
set vList to retAvailableTTSbyShortLangCodeAndSexAndPremium(a1Res, "Female", true) of me
if vList = {} then return

–Pick an item casually from the obtained TTS info list
set fV to contents of first item of vList

set vName to VoiceName of fV
say str1 using vName

—
on retAvailableTTSbyShortLangCodeAndSexAndPremium(aLangShortCode as string, aSex as string, premiumFlag as boolean)
  set outList to {}
  
  
if aSex is not in {"Male", "Female"} then error "Sex code is wrong"
  
  
set aList to NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aInfo to (NSSpeechSynthesizer’s attributesForVoice:j)
    
set aInfoRec to aInfo as record
    
    
–The spoken/supported character data is huge, so clear it out
    
set VoiceIndividuallySpokenCharacters of aInfoRec to {}
    
set VoiceSupportedCharacters of aInfoRec to {}
    
    
set aName to VoiceName of aInfoRec
    
set aLangCode to VoiceLocaleIdentifier of aInfoRec
    
    
set aGender to VoiceGender of aInfoRec
    
set aVID to VoiceIdentifier of aInfoRec
    
    
if (aLangCode starts with aLangShortCode) and (aGender = "VoiceGender" & aSex) then
      if premiumFlag = true then
        if aVID ends with "premium" then
          set the end of outList to aInfoRec
        end if
      else
        set the end of outList to aInfoRec
      end if
    end if
  end repeat
  
  
return outList
end retAvailableTTSbyShortLangCodeAndSexAndPremium

–Guess the language of a string and return the language name
on guessLanguageOf(theString)
  set theTagger to current application’s NSLinguisticTagger’s alloc()’s initWithTagSchemes:{current application’s NSLinguisticTagSchemeLanguage} options:0
  
theTagger’s setString:theString
  
set languageID to theTagger’s tagAtIndex:0 |scheme|:(current application’s NSLinguisticTagSchemeLanguage) tokenRange:(missing value) sentenceRange:(missing value)
  
return ((current application’s NSLocale’s localeWithLocaleIdentifier:"en")’s localizedStringForLanguageCode:languageID) as text
end guessLanguageOf

–Guess the language of a string and return the language code
on guessLanguageCodeOf(theString)
  set theTagger to current application’s NSLinguisticTagger’s alloc()’s initWithTagSchemes:{current application’s NSLinguisticTagSchemeLanguage} options:0
  
theTagger’s setString:theString
  
set languageID to theTagger’s tagAtIndex:0 |scheme|:(current application’s NSLinguisticTagSchemeLanguage) tokenRange:(missing value) sentenceRange:(missing value)
  
return languageID as text
end guessLanguageCodeOf

★Click Here to Open This Script 

Posted in Language Text Text to Speech | Tagged 10.15savvy 11.0savvy 12.0savvy NSLinguisticTagger NSLocale NSSpeechSynthesizer | Leave a comment

Get the demo sentence of a specified TTS voice character

Posted on July 6, 2020 by Takaaki Naganoya

An AppleScript that retrieves the demo sentence of the specified Text To Speech voice character and actually reads it aloud.

TTS voices carry information such as language, gender, age, and whether they are high quality, so you can filter on those attributes. The demo sentence of a given TTS voice character can be retrieved in the same way, as shown here.

AppleScript name: 指定TTSボイスキャラクタの読み上げ例文テキストを取得.scpt
— Created 2015-08-25 by Takaaki Naganoya
— Modified 2015-08-26 by Shane Stanley, Takaaki Naganoya
— Modified 2020-07-06 by Takaaki Naganoya
— 2020 Piyomaru Software
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

set vList to getVoiceNames() of me
using terms from scripting additions
  set aTargTTSVoiceName to contents of (choose from list vList)
end using terms from

using terms from scripting additions
  set v1Res to getDemoText(aTargTTSVoiceName) of me
  
say v1Res using aTargTTSVoiceName
end using terms from

–Get TTS Voice sample text
on getDemoText(aName as string)
  set vList to getVoiceNames() of me
  
if aName is not in vList then return ""
  
set anID to getSpecifiedVoiceIDfromVoiceName(aName) of me
  
  
set aDemoText to ((current application’s NSSpeechSynthesizer’s attributesForVoice:anID)’s VoiceDemoText)
  
return aDemoText as string
end getDemoText

–Get all voice names
on getVoiceNames()
  –Make Blank Array
  
set outArray to current application’s NSMutableArray’s arrayWithObject:{}
  
set aList to {}
  
  
–Make Installed Voice List
  
set nameList to current application’s NSSpeechSynthesizer’s availableVoices()
  
repeat with i in nameList
    set j to contents of i
    
    
set aDic to ((current application’s NSSpeechSynthesizer’s attributesForVoice:j))
    
    
set aDemoText to (aDic’s VoiceDemoText) as string
    
set aName to (aDic’s VoiceName) as string
    
    
set the end of aList to aName
  end repeat
  
  
return aList as list
end getVoiceNames

–Voice Name –> Voice ID
on getSpecifiedVoiceIDfromVoiceName(VoiceName as string)
  set outArray to current application’s NSMutableArray’s arrayWithObject:{}
  
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDic to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (outArray’s addObject:aDic)
  end repeat
  
  
–Filter Voice
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceName == %@", VoiceName)
  
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aReList to (filteredArray’s valueForKey:"VoiceIdentifier") as list
  
  
if length of aReList = 1 then
    return first item of aReList
  else
    return ""
  end if
end getSpecifiedVoiceIDfromVoiceName

★Click Here to Open This Script 

Posted in list Record System Text to Speech | Tagged 10.13savvy 10.14savvy 10.15savvy 11.0savvy NSMutableArray NSPredicate NSSpeechSynthesizer | Leave a comment

mapboxSpeech Sample

Posted on September 27, 2019 by Takaaki Naganoya

An AppleScript that calls the MapboxSpeech framework provided by mapbox.

mapbox provides various map-related features as web APIs. As part of that, it publishes a natural-sounding text-to-speech framework on GitHub, and I tried calling it from AppleScript.

The sample code on GitHub does include an example of calling it from AppleScript (an Xcode project), but there was no example of calling it from the ordinary Script Editor or Script Debugger (and the sample as-is showed no sign of working...).

# With an application built in Xcode, MapboxSpeech.framework apparently reads the access token written in a designated entry of the host application's Info.plist and uses it to access the REST API. Calling the framework standalone from plain AppleScript does not seem to have been an anticipated use of the sample.

So I signed up with mapbox, obtained an API key (more precisely, a per-project token), built the framework published on GitHub for macOS, and called it from an ordinary AppleScript.

Running the applet below actually reads a Japanese sample sentence aloud (probably). To run this script itself (not the applet) as AppleScript in your own environment, install the framework and run it in Script Debugger (macOS 10.14 or later), or run it in Script Editor on a machine with SIP disabled (macOS 10.14 or later). In that case, sign up on the mapbox website and obtain your own access token. Signing up gives you a public access token, but rather than using that one, create an individual project on the website (such as "Test AppleScript") and use that project's access token.

–> Download MapboxSpeech.framework(To ~/Library/Frameworks/)

–> mapboxSpeech Sample Run-Only(Code-Signed Executable Applet with Framework)

...When I had it read the sample sentence, the pronunciation sounded oddly like Japanese rather than English, and I briefly thought "what is this?". Then, half as a joke, I passed Japanese text as the parameter and was startled that it read the Japanese properly.

Web services of this kind normally do not support Japanese; English plus a handful of European languages is the norm.

Its morphological analysis seems weaker than the OS-standard say command, so you need to insert a comma (、) after each phrase, but even so it was quite a surprise that it reads Japanese text at all.

AppleScript name: mapboxSpeech Sample.scptd
—
–  Created by: Takaaki Naganoya
–  Created on: 2019/09/27
—
–  Copyright © 2019 Piyomaru Software, All Rights Reserved
—
use AppleScript version "2.4" — Yosemite (10.10) or later
use framework "Foundation"
use framework "MapboxSpeech" –https://github.com/mapbox/mapbox-speech-swift
use framework "AVFoundation"
use scripting additions

on run
  set theOptions to current application’s MBSpeechOptions’s alloc()
  
theOptions’s initWithText:"こんにちは、私の名前は、「ながのや」 です。"
  
  
set speechSynthesizer to current application’s MBSpeechSynthesizer’s alloc()’s initWithAccessToken:"xx.xxX.X_XXxxxxxXXXXXxXxXXxxX"
  
set theURL to speechSynthesizer’s URLForSynthesizingSpeechWithOptions:theOptions
  
set theData to the current application’s NSData’s dataWithContentsOfURL:theURL
  
  
set aAudioPlayer to current application’s AVAudioPlayer’s alloc()’s initWithData:theData |error|:(missing value)
  
aAudioPlayer’s prepareToPlay()
  
aAudioPlayer’s setNumberOfLoops:0
  
aAudioPlayer’s setDelegate:me
  
aAudioPlayer’s play()
end run

–Delegate method called when audio playback finishes
on audioPlayerDidFinishPlaying:anAudioplayer successfully:aFlag
  tell current application to quit
end audioPlayerDidFinishPlaying:successfully:

★Click Here to Open This Script 

Posted in REST API Sound Text to Speech | Tagged 10.12savvy 10.13savvy 10.14savvy | Leave a comment

Reading Japanese numbers aloud with TTS

Posted on September 26, 2019 by Takaaki Naganoya

An AppleScript for Text To Speech (TTS) reading of numbers with many digits.

# This is a niche topic about numeric notation in the Japanese environment and is irrelevant to other languages. The notation seems to be shared across the kanji cultural sphere, but I have not verified strict compatibility with, say, the digit notation used in China or Korea.

Before running, install the Japanese TTS voice "Otoya" or "Kyoko".

First, AppleScript can represent numbers without exponent notation only up to around 10^9; beyond that they switch to exponent notation.

However, the know-how for converting an exponent-notation number back into a numeric string is shared worldwide, and it suffices to call a subroutine (Stringify) written for that purpose.

AppleScript's say command reads numbers up to 100兆 (100 trillion) reasonably well, but at 1000兆 the reading degrades into a string of individual digits.

For that, too, I have long published a subroutine that converts large numbers into Japanese numeric-notation strings (posted 11 years ago, when this blog started); just call it to get a Japanese numeric expression string, which should then be read correctly.

Number Japanese English
1 一(いち) one
10 十(じゅう) ten
100 百(ひゃく) hundred
1000 千(せん) thousand
10000 万(まん) 10 thousand
100000 十万(じゅうまん) 100 thousand
1000000 百万(ひゃくまん) million
10000000 千万(せんまん) 10 million
100000000 億(おく) 100million
1000000000000 兆(ちょう) trillion
1000000000000000 京(けい) thousand trillion
100000000000000000 100京(ひゃっけい) 100 thousand trillion
10^24 丈(じょ)
10^28 穣(じょう)
10^52 恒河沙(ごうがしゃ)
10^56 阿僧祇(あそうぎ)
10^60 那由他(なゆた)
10^64 不可思議(ふかしぎ)
10^68 無量大数(むりょうたいすう)

That said, when you have it read digit units such as 阿僧祇 (asōgi, 10^56) or 那由他 (nayuta, 10^60), TTS does not read them correctly. Units this obscure hardly need to be covered. The existence of similar-sounding units such as 丈 (jo) and 穣 (jō) strongly suggests these were meant to be read with the eyes, not spoken aloud.

AppleScript name: TTSで日本語数値読み上げ
repeat with i from 0 to 15
  set aNum to (10 ^ i)
  
say Stringify(aNum) of me using "Kyoko" –or "Otoya"
end repeat

on Stringify(x) — for E+ numbers
  set x to x as string
  
set {tids, AppleScript’s text item delimiters} to {AppleScript’s text item delimiters, {"E+"}}
  
if (count (text items of x)) = 1 then
    set AppleScript’s text item delimiters to {tids}
    
return x
  else
    set {n, z} to {text item 1 of x, (text item 2 of x) as integer}
    
set AppleScript’s text item delimiters to {tids}
    
set i to character 1 of n
    
set decSepChar to character 2 of n — "." or ","
    
set d to text 3 thru -1 of n
    
set l to count d
    
if l > z then
      return (i & (text 1 thru z of d) & decSepChar & (text (z + 1) thru -1 of d))
    else
      repeat (z – l) times
        set d to d & "0"
      end repeat
      
return (i & d)
    end if
  end if
end Stringify

★Click Here to Open This Script 

AppleScript name: TTSで日本語数値読み上げ v2
repeat with i from 15 to 70
  set aNum to (10 ^ i)
  
set jRes to encodeJapaneseNumText(aNum) of japaneseNumberEncodingKit
  
say jRes using "Kyoko" –or "Otoya"
end repeat

–To do: no overflow check is performed
set a to "102320120000108220010"
set jRes to encodeJapaneseNumText(a) of japaneseNumberEncodingKit
–> "1垓232京120兆1億822万10"

script japaneseNumberEncodingKit
  –Convert a numeric string into a Japanese numeric-notation string
  
on encodeJapaneseNumText(aNum)
    
    
set aText to Stringify(aNum) of me
    
set aText to aText as Unicode text
    
set dotText to "." as Unicode text
    
set upperDigit to ""
    
set lowerDigit to ""
    
    
–Handle the decimal point
    
if dotText is in aText then
      set b to offset of dotText in aText
      
set upperDigit to characters 1 thru (b – 1) of aText
      
set upperDigit to upperDigit as Unicode text
      
set lowerDigit to characters b thru -1 of aText
      
set lowerDigit to lowerDigit as Unicode text
    else
      set upperDigit to aText
    end if
    
    
    
set scaleList3 to {"", "万", "億", "兆", "京", "垓", "丈", "壌", "溝", "砂", "正", "載", "極", "恒河沙", "阿僧梢", "那由他", "不可思議", "無量大数"}
    
set splitDigit to 4
    
set nList to splitByDigit(upperDigit, splitDigit) of me
    
set nList to reverse of nList
    
    
set resText to ""
    
set digCount to 1
    
repeat with i in nList
      set b to (contents of i) as number
      
if b is not equal to 0 then
        set resText to (b as text) & item digCount of scaleList3 & resText
      end if
      
set digCount to digCount + 1
    end repeat
    
    
    
    
return resText & lowerDigit
    
  end encodeJapaneseNumText
  
  
–Split into chunks of the specified number of digits
  
on splitByDigit(a, splitDigit)
    set aList to characters of a
    
set aList to reverse of aList
    
log aList
    
set resList to {}
    
set tempT to ""
    
set tempC to 1
    
repeat with i in aList
      set tempT to contents of i & tempT
      
if tempC mod splitDigit = 0 then
        set resList to {tempT} & resList
        
set tempT to ""
      end if
      
set tempC to tempC + 1
    end repeat
    
    
if tempT is not equal to "" then
      set resList to {tempT} & resList
    end if
    
    
resList
    
  end splitByDigit
  
  
  
  
on Stringify(x) — for E+ numbers
    set x to x as string
    
set {tids, AppleScript’s text item delimiters} to {AppleScript’s text item delimiters, {"E+"}}
    
if (count (text items of x)) = 1 then
      set AppleScript’s text item delimiters to tids
      
return x
    else
      set {n, z} to {text item 1 of x, (text item 2 of x) as integer}
      
set AppleScript’s text item delimiters to tids
      
set i to character 1 of n
      
set decSepChar to character 2 of n — "." or ","
      
set d to text 3 thru -1 of n
      
set l to count d
      
if l > z then
        return (i & (text 1 thru z of d) & decSepChar & (text (z + 1) thru -1 of d))
      else
        repeat (z – l) times
          set d to d & "0"
        end repeat
        
return (i & d)
      end if
    end if
  end Stringify
end script

★Click Here to Open This Script 

Posted in Number System Text Text to Speech | Tagged 10.12savvy 10.13savvy 10.14savvy | Leave a comment

Read text aloud with TTS and calculate the required time v2.1 (CotEditor version)

Posted on February 6, 2018 by Takaaki Naganoya

An AppleScript that reads the specified text with a TTS (Text To Speech) voice and computes how long the reading takes.

Rather than speaking in real time, it has the say command render the speech to a file, so it finishes in less time than the actual reading would take.

I use it to simulate the required time by rendering separately at a slow rate and a fast rate, or to run the simulation over every text element of a Keynote presentation to judge whether the deck has too many or too few slides (you cannot present more material than the allotted time permits).

People unaccustomed to giving presentations especially tend to pile on pages and cram text onto each one, so the say-based reading simulation lets me point out "isn't this too much material for the time you have?".

AppleScript名:テキストをTTSで読み上げて所要時間を算出 v2.1(CotEditor版)
— Created 2018-01-10 by Takaaki Naganoya
— 2018 Piyomaru Software
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AVFoundation"
use framework "AppKit"
–http://piyocast.com/as/archives/5113

property |NSURL| : a reference to current application’s |NSURL|
property NSDate : a reference to current application’s NSDate
property NSUUID : a reference to current application’s NSUUID
property NSFileManager : a reference to current application’s NSFileManager
property AVAudioPlayer : a reference to current application’s AVAudioPlayer
property NSDateFormatter : a reference to current application’s NSDateFormatter
property NSSpeechSynthesizer : a reference to current application’s NSSpeechSynthesizer

set str3 to getEditorText() of me
if str3 = false then return

set aVoice to "Kyoko"

–Check existence of TTS Voice name
set vList to retAvailableTTSnames() of me
if aVoice is not in vList then error "Wrong TTS Voice Name"

set d1 to readTextByTTSVoiceAndReturnDuration(str3, aVoice, 180) of me –aSpeedRate is "Words per minute. 180 to 220"
set d2 to readTextByTTSVoiceAndReturnDuration(str3, aVoice, 220) of me

set outStr to (formatHMS(d1) of me & "/180 words per minute") & return & (formatHMS(d2) of me & "/220 words per minute") & return
tell application "CotEditor"
  activate
  
write to console outStr
end tell

on readTextByTTSVoiceAndReturnDuration(aStr as string, aVoice as string, aSpeedRate as integer)
  set aUUID to NSUUID’s UUID()’s UUIDString() as string
  
–set aPath to (((path to temporary items from user domain) as string) & aUUID & ".aif")
  
set aPath to (((path to desktop) as string) & aUUID & ".aif")
  
set aPOSIX to POSIX path of aPath
  
  
tell current application
    say aStr using aVoice saving to (aPOSIX) speaking rate aSpeedRate without waiting until completion
  end tell
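  -- say returns immediately here (without waiting until completion), so busy-wait below until
  -- the rendered AIFF file appears; getDuration() additionally retries prepareToPlay() in case
  -- rendering is still finishing when the file first shows up.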
  
  
repeat 100000 times
    set aExt to NSFileManager’s defaultManager()’s fileExistsAtPath:aPOSIX
    
if aExt as boolean = true then exit repeat
    
delay "0.001" as real
  end repeat
  
  
if (aExt as boolean) = false then error "No Sound file"
  
  
set aDur to getDuration(aPath as alias) of me
  
try
    do shell script "rm -f " & quoted form of POSIX path of aPath
  end try
  
  
return aDur as real
end readTextByTTSVoiceAndReturnDuration

on getDuration(aFile)
  set aURL to |NSURL|’s fileURLWithPath:(POSIX path of aFile)
  
  
repeat 1000 times
    set aAudioPlayer to AVAudioPlayer’s alloc()’s initWithContentsOfURL:aURL |error|:(missing value)
    
set aRes to aAudioPlayer’s prepareToPlay()
    
if aRes as boolean = true then exit repeat
    
delay 0.5
  end repeat
  
if (aRes as boolean) = false then error "TTS sound output failed"
  
  
set channelCount to aAudioPlayer’s numberOfChannels()
  
set aDuration to aAudioPlayer’s duration()
  
return aDuration as real
end getDuration

on retAvailableTTSnames()
  set outList to {}
  
  
set aList to NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aInfo to (NSSpeechSynthesizer’s attributesForVoice:j)
    
set aInfoRec to aInfo as record
    
set aName to VoiceName of aInfoRec
    
set the end of outList to aName
  end repeat
  
  
return outList
end retAvailableTTSnames

on formatHMS(aTime)
  set aDate to NSDate’s dateWithTimeIntervalSince1970:aTime
  
set aFormatter to NSDateFormatter’s alloc()’s init()
  
  
—This formatter text is localized in Japanese.
  
if aTime < hours then
    aFormatter’s setDateFormat:"mm分ss秒"
  else if aTime < days then
    aFormatter’s setDateFormat:"HH時間mm分ss秒"
  else
    aFormatter’s setDateFormat:"DD日HH時間mm分ss秒"
  end if
  
  
set timeStr to (aFormatter’s stringFromDate:aDate) as string
  
return timeStr
end formatHMS

on getEditorText()
  tell application "CotEditor"
    if (count every document) = 0 then return false
    
tell front document
      return contents
    end tell
  end tell
end getEditorText

★Click Here to Open This Script 

Posted in System Text to Speech | Tagged 10.11savvy 10.12savvy 10.13savvy CotEditor | Leave a comment

Get a list of the IDs of the TTS Voices installed on the system

Posted on February 6, 2018 by Takaaki Naganoya
AppleScript名:システムにインストールされているTTS VoiceのID一覧を取得する
— Created 2017-03-28 by Takaaki Naganoya
— 2017 Piyomaru Software
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

set vList to current application’s NSSpeechSynthesizer’s availableVoices() as list
–>  {​​​​​"com.apple.speech.synthesis.voice.Alex", ​​​​​"com.apple.speech.synthesis.voice.alice", ​​​​​"com.apple.speech.synthesis.voice.allison.premium", ​​​​​"com.apple.speech.synthesis.voice.alva", ​​​​​"com.apple.speech.synthesis.voice.amelie", ​​​​​"com.apple.speech.synthesis.voice.anna.premium", ​​​​​"com.apple.speech.synthesis.voice.audrey.premium", ​​​​​"com.apple.speech.synthesis.voice.ava.premium", ​​​​​"com.apple.speech.synthesis.voice.carmit", ​​​​​"com.apple.speech.synthesis.voice.damayanti", ​​​​​"com.apple.speech.synthesis.voice.daniel.premium", ​​​​​"com.apple.speech.synthesis.voice.diego", ​​​​​"com.apple.speech.synthesis.voice.ellen",……}
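-- A small related sketch (my addition, not in the original post): the ID of the voice currently
-- selected as the system default can be read the same way.
set defaultVoiceID to (current application's NSSpeechSynthesizer's defaultVoice()) as string
--> the result depends on the user's settings, e.g. "com.apple.speech.synthesis.voice.kyoko.premium"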

★Click Here to Open This Script 

Posted in System Text to Speech | Tagged 10.11savvy 10.12savvy 10.13savvy | Leave a comment

Return the TTS Voices installed in the OS that support a specified language

Posted on February 6, 2018 by Takaaki Naganoya
AppleScript名:OSにインストールされているTTS Voiceのうち指定言語をサポートするものを返す
— Created 2017-03-28 by Takaaki Naganoya
— 2017 Piyomaru Software
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

set aLoc to (current application’s NSLocale’s currentLocale()’s identifier()) as string
–>  "ja_JP"

set vList to getTTSVoiceNameWithLanguage(aLoc) of me
–>  {"Kyoko", "Otoya"}

set vIDs to getTTSVoiceIDWithLanguage(aLoc) of me
–>  {"com.apple.speech.synthesis.voice.kyoko.premium", "com.apple.speech.synthesis.voice.otoya.premium"}

set anID to getTTSVoiceIDWithName("Kyoko") of me
–>  {"com.apple.speech.synthesis.voice.kyoko.premium"}

set anAge to getTTSVoiceAgeWithName("Otoya") of me
–>  35

on getTTSVoiceNameWithLanguage(voiceLang)
set outArray to current application’s NSMutableArray’s new()
  
  
–Make Installed Voice List
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (
outArray’s addObject:aDIc)
  end repeat
  
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceLocaleIdentifier == %@", voiceLang)
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (filteredArray’s valueForKey:"VoiceName") as list
  
  
return aResList
end getTTSVoiceNameWithLanguage

on getTTSVoiceIDWithLanguage(voiceLang)
set outArray to current application’s NSMutableArray’s new()
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (
outArray’s addObject:aDIc)
  end repeat
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceLocaleIdentifier == %@", voiceLang)
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (filteredArray’s valueForKey:"VoiceIdentifier") as list
  
  
return aResList
end getTTSVoiceIDWithLanguage

on getTTSVoiceIDWithName(voiceName)
set outArray to current application’s NSMutableArray’s new()
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (
outArray’s addObject:aDIc)
  end repeat
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceName == %@", voiceName)
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (filteredArray’s valueForKey:"VoiceIdentifier") as list
  
  
return aResList
end getTTSVoiceIDWithName

on getTTSVoiceAgeWithName(voiceName)
set outArray to current application’s NSMutableArray’s new()
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (
outArray’s addObject:aDIc)
  end repeat
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceName == %@", voiceName)
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (filteredArray’s valueForKey:"VoiceAge") as list
  
set anItem to first item of aResList
  
if anItem = missing value then return 0
  
return anItem as integer
end getTTSVoiceAgeWithName

★Click Here to Open This Script 

Posted in System Text to Speech | Tagged 10.11savvy 10.12savvy 10.13savvy | Leave a comment

Filter TTS Voices by language and gender

Posted on February 6, 2018 by Takaaki Naganoya
AppleScript名:TTS Voiceを言語と性別で抽出
— Created 2017-03-28 by Takaaki Naganoya
— 2017 Piyomaru Software
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

set aLoc to (current application’s NSLocale’s currentLocale()’s identifier()) as string
–>  "ja_JP"

set vList to getTTSVoiceNameWithLanguageAndGender(aLoc, "Male") of me
–>  {"Otoya"}

set vList to getTTSVoiceNameWithLanguageAndGender(aLoc, "Female") of me
–>  {"Kyoko"}

set vList to getTTSVoiceNameWithLanguageAndGender("en_US", "Male") of me
–> {"Alex", "Bruce", "Fred", "Junior", "Ralph", "Tom"}

on getTTSVoiceNameWithLanguageAndGender(voiceLang, aGen)
  if aGen = "Male" then
    set aGender to "VoiceGenderMale"
  else if aGen = "Female" then
    set aGender to "VoiceGenderFemale"
  else
    error "aGen must be Male or Female" --guard: aGender would otherwise be undefined
  end if
  
  
set outArray to current application’s NSMutableArray’s new()
  
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (
outArray’s addObject:aDIc)
  end repeat
  
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceLocaleIdentifier == %@ && VoiceGender== %@", voiceLang, aGender)
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (filteredArray’s valueForKey:"VoiceName") as list
  
  
return aResList
end getTTSVoiceNameWithLanguageAndGender
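-- A minimal follow-up sketch (my addition; it assumes at least one matching voice is installed):
-- speak a test phrase with the first voice matching the requested language and gender.
set fVoices to getTTSVoiceNameWithLanguageAndGender("en_US", "Female") of me
if fVoices is not {} then say "Hello" using (item 1 of fVoices)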

★Click Here to Open This Script 

Posted in System Text to Speech | Tagged 10.11savvy 10.12savvy 10.13savvy | Leave a comment

Extract Locale information from all TTS Voices and get the TTS Voice names for a specified Locale

Posted on February 6, 2018 by Takaaki Naganoya
AppleScript名:すべてのTTS VoiceからLocale情報を抽出し、指定LocaleのTTS Voice名を取得
— Created 2015-08-25 by Takaaki Naganoya
— Modified 2015-08-26 by Shane Stanley, Takaaki Naganoya
— 2015 Piyomaru Software
use AppleScript version "2.5"
use scripting additions
use framework "Foundation"
use framework "AppKit"

–Extract the Locale identifier information from all TTS Voices and deduplicate it
set v1Res to getLocaleICodeFromTTSVoices()
set vRes to choose from list v1Res with prompt "Select Locale"
if vRes is false then return --user cancelled the dialog
set v2Res to getTTSVoiceNameWithLanguage(first item of vRes) of me

on getLocaleICodeFromTTSVoices()
  set aResList to getAttributeFromTTSVoices("VoiceLocaleIdentifier") of me
  
return aResList as list
end getLocaleICodeFromTTSVoices

on getAttributeFromTTSVoices(anAttribute)
  set outArray to current application’s NSMutableArray’s new()
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDict to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (
outArray’s addObject:aDict)
  end repeat
  
  
set aResArray to (outArray’s valueForKey:anAttribute)
  
  
set aSet to current application’s NSMutableSet’s setWithArray:aResArray
  
set aResList to aSet’s allObjects()
  
  
return aResList as list
end getAttributeFromTTSVoices

on getTTSVoiceNameWithLanguage(voiceLang)
  set outArray to current application’s NSMutableArray’s new()
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDIc to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (
outArray’s addObject:aDIc)
  end repeat
  
  
set aPredicate to current application’s NSPredicate’s predicateWithFormat_("VoiceLocaleIdentifier == %@", voiceLang)
  
set filteredArray to outArray’s filteredArrayUsingPredicate:aPredicate
  
set aResList to (filteredArray’s valueForKey:"VoiceName") as list
  
  
return aResList
end getTTSVoiceNameWithLanguage

★Click Here to Open This Script 

Posted in System Text to Speech | Tagged 10.11savvy 10.12savvy 10.13savvy | Leave a comment

Extract and deduplicate the Language information from all TTS Voices

Posted on February 6, 2018 by Takaaki Naganoya
AppleScript名:すべてのTTS VoiceからLanguage情報を抽出してユニーク化
— Created 2015-08-25 by Takaaki Naganoya
— Modified 2015-08-26 by Shane Stanley, Takaaki Naganoya
— 2015 Piyomaru Software
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

set v1Res to getLocaleICodeFromTTSVoices()
–>  {​​​​​"fr_FR", ​​​​​"zh_TW", ​​​​​"it_IT", ​​​​​"en_ZA", ​​​​​"es_AR", ​​​​​"ko_KR", ​​​​​"ro_RO", ​​​​​"en_IN", ​​​​​"fr_CA", ​​​​​"hi_IN", ​​​​​"da_DK", ​​​​​"en-scotland", ​​​​​"pt_BR", ​​​​​"zh_CN", ​​​​​"sv_SE", ​​​​​"es_ES", ​​​​​"ar_SA", ​​​​​"hu_HU", ​​​​​"en_GB", ​​​​​"ja_JP", ​​​​​"fi_FI", ​​​​​"zh_HK", ​​​​​"tr_TR", ​​​​​"nb_NO", ​​​​​"pl_PL", ​​​​​"id_ID", ​​​​​"cs_CZ", ​​​​​"el_GR", ​​​​​"he_IL", ​​​​​"ru_RU", ​​​​​"de_DE", ​​​​​"en_AU", ​​​​​"nl_BE", ​​​​​"pt_PT", ​​​​​"th_TH", ​​​​​"sk_SK", ​​​​​"en_US", ​​​​​"en_IE", ​​​​​"nl_NL", ​​​​​"es_MX"​​​}

set v2Res to getLanguageCodeFromTTSVoices()
–> {"nl-NL", "id", "fr-FR", "it-IT", "es-419", "ko-KR", "ro-RO", "fr-CA", "hi-IN", "da-DK", "pt-BR", "sv-SE", "es-ES", "hu-HU", "en-GB", "ja-JP", "fi-FI", "tr-TR", "ar", "nb-NO", "pl-PL", "cs-CZ", "el-GR", "he-IL", "ru-RU", "zh-Hans", "de-DE", "en-AU", "zh-Hant", "nl-BE", "pt-PT", "th-TH", "sk-SK", "en-US", "en-IE"}

on getLanguageCodeFromTTSVoices()
  set aResList to getAttributeFromTTSVoices("VoiceLanguage") of me
  
return aResList as list
end getLanguageCodeFromTTSVoices

on getLocaleICodeFromTTSVoices()
  set aResList to getAttributeFromTTSVoices("VoiceLocaleIdentifier") of me
  
return aResList as list
end getLocaleICodeFromTTSVoices

on getAttributeFromTTSVoices(anAttribute)
  set outArray to current application’s NSMutableArray’s new()
  
set aList to current application’s NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aDict to (current application’s NSSpeechSynthesizer’s attributesForVoice:j)
    (
outArray’s addObject:aDict)
  end repeat
  
  
set aResArray to (outArray’s valueForKey:anAttribute)
  
  
set aSet to current application’s NSMutableSet’s setWithArray:aResArray
  
set aResList to aSet’s allObjects()
  
  
return aResList as list
end getAttributeFromTTSVoices

★Click Here to Open This Script 

Posted in System Text to Speech | Tagged 10.11savvy 10.12savvy 10.13savvy | Leave a comment

Get a list of TTS Voice names

Posted on February 6, 2018 by Takaaki Naganoya
AppleScript名:TTS Voice名一覧を取得
use AppleScript version "2.4"
use scripting additions
use framework "Foundation"
use framework "AppKit"

property NSSpeechSynthesizer : a reference to current application’s NSSpeechSynthesizer

set vList to retAvailableTTSnames() of me
–> {"Agnes", "Albert", "Alex", "Alice", "Allison", "Alva", "Amelie", "Anna", "Audrey", "Ava", "Bad News", "Bahh", "Bells", "Boing", "Bruce", "Bubbles", "Carmit", "Cellos", "Damayanti", "Daniel", "Deranged", "Diego", "Ellen", "Emily", "Fiona", "Fred", "Good News", "Hysterical", "Ioana", "Jill", "Joana", "Jorge", "Juan", "Junior", "Kanya", "Karen", "Kate", "Kathy", "Kyoko", "Laura", "Lee", "Lekha", "Luca", "Luciana", "Maged", "Mariska", "Mei-Jia", "Melina", "Milena", "Moira", "Monica", "Nora", "Otoya", "Paulina", "Pipe Organ", "Princess", "Ralph", "Samantha", "Sara", "Satu", "Serena", "Sin-ji", "Tessa", "Thomas", "Ting-Ting", "Tom", "Trinoids", "Veena", "Vicki", "Victoria", "Whisper", "Xander", "Yelda", "Yuna", "Yuri", "Zarvox", "Zosia", "Zuzana"}

on retAvailableTTSnames()
  set outList to {}
  
  
set aList to NSSpeechSynthesizer’s availableVoices()
  
set bList to aList as list
  
  
repeat with i in bList
    set j to contents of i
    
set aInfo to (NSSpeechSynthesizer’s attributesForVoice:j)
    
set aInfoRec to aInfo as record
    
set aName to VoiceName of aInfoRec
    
set the end of outList to aName
  end repeat
  
  
return outList
end retAvailableTTSnames

★Click Here to Open This Script 

Posted in System Text to Speech | Tagged 10.11savvy 10.12savvy 10.13savvy | Leave a comment
