Can access to SoundAnalysis (the sound classifier built into the next versions of macOS, iOS, and watchOS) be provided to my app running in the background on iPhone or Apple Watch?
I want to monitor local sounds from an Apple Watch or iPhone and take remote action on out-of-band data (e.g. send an alert to a caregiver if the coughing rate is too high, or if someone has been knocking on the door for more than a minute).
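For reference, a minimal sketch of what I have in mind, using the built-in classifier on an audio-engine tap. The SoundAnalysis calls are the public API, but the "cough" label check and the threshold are assumptions (the actual label strings can be read from the request's knownClassifications):

import AVFoundation
import SoundAnalysis

// Observer for results from the built-in sound classifier.
final class SoundObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        // Label string and threshold are assumptions for this sketch.
        if top.identifier.contains("cough"), top.confidence > 0.8 {
            // Count events here and notify a caregiver when the rate is too high.
            print("Cough detected, confidence \(top.confidence)")
        }
    }
}

final class SoundMonitor {
    private let engine = AVAudioEngine()
    private let observer = SoundObserver()
    private var analyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let format = engine.inputNode.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        self.analyzer = analyzer

        // The built-in (version 1) classifier that ships with the OS.
        let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
        try analyzer.add(request, withObserver: observer)

        engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
        try engine.start()
    }
}

Whether a setup like this keeps running while the app is in the background on iPhone or Apple Watch is exactly what I'm asking about.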
Did something change on face detection / Vision Framework on iOS 15?
Using VNDetectFaceLandmarksRequest and reading the VNFaceLandmarkRegion2D to detect eyes no longer works on iOS 15 as it did before. I am running the exact same code on an iOS 14 device and an iOS 15 device, and the coordinates are different, as seen in the screenshot.
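Here is roughly how the landmarks are read (a trimmed-down sketch, not my exact project code; the function name is illustrative):

import Vision

func detectEyes(in cgImage: CGImage) throws {
    let request = VNDetectFaceLandmarksRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    for face in request.results ?? [] {
        guard let leftEye = face.landmarks?.leftEye else { continue }
        // Points normalized to the face bounding box.
        print("Normalized:", leftEye.normalizedPoints)
        // Points converted to image coordinates; these are the values that differ
        // between the iOS 14 and iOS 15 devices.
        let size = CGSize(width: cgImage.width, height: cgImage.height)
        print("Image coordinates:", leftEye.pointsInImage(imageSize: size))
    }
}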
Any ideas?
In iOS 15, when calling stopSpeaking on AVSpeechSynthesizer,
the didFinish delegate method is called instead of didCancel. This works correctly on iOS 14 and earlier.
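A minimal sketch of the setup (the class and property names are illustrative; the delegate callbacks are the standard AVSpeechSynthesizerDelegate methods):

import AVFoundation

final class Speaker: NSObject, AVSpeechSynthesizerDelegate {
    let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speakAndStop() {
        synthesizer.speak(AVSpeechUtterance(string: "Hello, world"))
        _ = synthesizer.stopSpeaking(at: .immediate)
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didCancel utterance: AVSpeechUtterance) {
        print("didCancel") // expected after stopSpeaking; this fires on iOS 14 and earlier
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("didFinish") // called instead on iOS 15
    }
}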
I am working on the updatable neural network classifier example from coremltools.readme.io, in the updatable neural network section (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset).
I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this can be a coremltools version issue; right now I am using coremltools 6.2. I converted the model to an mlmodel with coremltools.convert instead, and the conversion succeeded.
But then I hit an error in the make_updatable function saying the loss layer input must be a softmax output. From the coremltools API reference I found this is because the layer type is softmaxND, whereas it should be softmax.
So the problem is that when I convert the Keras Sequential model to a Core ML model, the layer names and types change, and the softmax layer becomes softmaxND.
Has anyone faced this issue?
If I execute builder.inspect_layers(last=4), I get this output:
[Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
          Updatable: False
          Input blobs: ['sequential/dense_1/MatMul']
          Output blobs: ['Identity']
[Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
          Updatable: False
          Input blobs: ['sequential/dense/Relu']
          Output blobs: ['sequential/dense_1/MatMul']
[Id: 30], Name: sequential/dense/Relu (Type: activation)
          Updatable: False
          Input blobs: ['sequential/dense/MatMul']
          Output blobs: ['sequential/dense/Relu']
In the make_updatable function, when I execute
builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')
I get this error:
ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
Hi everyone, I might need some help with on-device recognition. It seems that, during a single audio session, the speech recognition task discards whatever it has transcribed once a new sentence starts (or once it believes a new sentence has started) when requiresOnDeviceRecognition is set to true.
This doesn't happen with requiresOnDeviceRecognition set to false.
System environment: macOS 14 with Xcode 15, deploying to iOS 17
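A minimal sketch of how the request is configured (the audio-engine plumbing that feeds buffers into the request is omitted; only the flags shown are relevant):

import Speech

func startOnDeviceRecognition(with recognizer: SFSpeechRecognizer) -> SFSpeechRecognitionTask {
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true
    request.requiresOnDeviceRecognition = true   // with false, the transcript accumulates as expected

    return recognizer.recognitionTask(with: request) { result, _ in
        guard let result else { return }
        // With on-device recognition, this string drops the previously transcribed text
        // once the recognizer decides a new sentence has started.
        print(result.bestTranscription.formattedString)
    }
}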
Thank you all!
Hello,
I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 as on iOS 16, but I'm posting it here just in case.
TL;DR
The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.
Longer description
The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and iPhone 13 Pro (both use the A15 Bionic).
iOS 16 - iPhone SE 3rd Gen (A15 Bionic)
iOS 16 uses the ANE and results in fast prediction, load and compilation times.
iOS 17 - iPhone 13 Pro (A15 Bionic)
iOS 17 doesn't seem to use the ANE, so the prediction, load, and compilation times are all slower.
Code To Reproduce
The following is the code I'm using to export my PyTorch vision model (using coremltools).
I've used the same code for the past few months with sensational results on iOS 16.
# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)
System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS (e.g. macOS version or Linux type): Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Any other relevant version information (e.g. PyTorch or TensorFlow version): PyTorch 2.0
Additional context
This happens with both "neuralnetwork" and "mlprogram" model types: neither uses the ANE on iOS 17, but both use it on iOS 16.
If anyone has a similar experience, I'd love to hear more.
Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know.
Thank you!
Xcode 15.3 AppIntentsSSUTraining warning: missing the definition of locale # variables.1.definitions
Hello!
I've noticed that adding localizations for AppShortcuts triggers the following warnings in Xcode 15.3:
warning: missing the definition of zh-Hans # variables.1.definitions
warning: missing the definition of zh-Hans # variables.2.definitions
This occurs with both legacy strings files and String Catalogs.
Example project: https://github.com/gongzhang/AppShortcutsLocalizationWarningExample
I'm trying to create an NLModel within a MessageFilterExtension handler.
The code works fine in the main app, but when I try to use it in the extension it fails to initialize. Even this single line fails with the error below.
The failing line (SMS_Classifier is the class Xcode generated for my model; this line works fine in the main app):
let mlModel = try SMS_Classifier(configuration: MLModelConfiguration()).model
Error
Unable to locate Asset for contextual word embedding model for local en.
MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "initialization of text classifier model with model data failed" UserInfo={NSLocalizedDescription=initialization of text classifier model with model data failed}
Any ideas?
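For context, here is roughly what the handler is doing. This is a sketch: the ILMessageFilterQueryHandling plumbing and the label handling are illustrative assumptions; only the SMS_Classifier line is taken verbatim from my code:

import IdentityLookup
import NaturalLanguage
import CoreML

final class MessageFilterExtension: ILMessageFilterExtension, ILMessageFilterQueryHandling {

    func handle(_ queryRequest: ILMessageFilterQueryRequest,
                context: ILMessageFilterExtensionContext,
                completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
        let response = ILMessageFilterQueryResponse()
        do {
            // This is the line that fails in the extension but works in the main app.
            let mlModel = try SMS_Classifier(configuration: MLModelConfiguration()).model
            let classifier = try NLModel(mlModel: mlModel)
            let label = classifier.predictedLabel(for: queryRequest.messageBody ?? "")
            // The "junk" label value is an assumption for this sketch.
            response.action = (label == "junk") ? .junk : .allow
        } catch {
            response.action = .none
        }
        completion(response)
    }
}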
Hi everyone!
I'm getting random crashes when I'm using the Speech Recognizer functionality in my app.
This is an old bug (reported on the Apple Forums for 8 years), and I would really appreciate it if anyone from Apple could find a fix for these crashes.
Could anyone also help me understand what I can do to keep the Speech Recognizer functionality available in my app while avoiding these crashes (for example, whether there is another native library or a CocoaPods library available)?
Here is my code and also the crash log for it.
Code:
func startRecording() {
    startStopRecordBtn.setImage(UIImage(#imageLiteral(resourceName: "microphone_off")), for: .normal)
    if UserDefaults.standard.bool(forKey: Constants.darkTheme) {
        commentTextView.textColor = .white
    } else {
        commentTextView.textColor = .black
    }
    commentTextView.isUserInteractionEnabled = false
    recordingLabel.text = Constants.recording
    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSession.Category.record)
        try audioSession.setMode(AVAudioSession.Mode.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let inputNode = audioEngine.inputNode
    guard let recognitionRequest = recognitionRequest else {
        fatalError(Constants.error)
    }
    recognitionRequest.shouldReportPartialResults = true
    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        if result != nil {
            self.commentTextView.text = result?.bestTranscription.formattedString
            isFinal = (result?.isFinal)!
        }
        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            self.startStopRecordBtn.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in // CRASH HERE
        self?.recognitionRequest?.append(buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
}
Here is the crash log:
Thank you very much for reading this!
Does the new Image Playground API allow programmatically generating images? Can the app generate and use them without the API's UI, or would that require using another generative image model?
The Translation API introduced at Session 10117 is impressive, but limiting it to SwiftUI is restrictive.
This API works great in the demo, but for more complex apps, it lacks flexibility because it is bound to SwiftUI Views.
Please consider making it available in non-SwiftUI environments.
iOS 18 App Intents while supporting iOS 17
Hello,
I have an existing app that supports iOS 17. I already have three App Intents but would like to add some of the new iOS 18 app intents like ShowInAppSearchResultsIntent.
However, I am having a hard time using #available or @available to limit this ShowInAppSearchResultsIntent to iOS 18 only while still supporting iOS 17.
Obviously, ShowInAppSearchResultsIntent needs to use @AssistantIntent, which is iOS 18 only, so I mark that struct as @available(iOS 18, *). That works as expected. It is when I need to add this SearchSnippetIntent to the AppShortcutsProvider that I begin to have trouble. See the code below:
struct SnippetsShortcutsAppShortcutsProvider: AppShortcutsProvider {
    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        // iOS 17+
        AppShortcut(intent: SnippetsNewSnippetShortcutsAppIntent(), phrases: [
            "Create a New Snippet in \(.applicationName) Studio",
        ], shortTitle: "New Snippet", systemImageName: "rectangle.fill.on.rectangle.angled.fill")

        AppShortcut(intent: SnippetsNewLanguageShortcutsAppIntent(), phrases: [
            "Create a New Language in \(.applicationName) Studio",
        ], shortTitle: "New Language", systemImageName: "curlybraces")

        AppShortcut(intent: SnippetsNewTagShortcutsAppIntent(), phrases: [
            "Create a New Tag in \(.applicationName) Studio",
        ], shortTitle: "New Tag", systemImageName: "tag.fill")

        // iOS 18 only
        AppShortcut(intent: SearchSnippetIntent(), phrases: [
            "Search \(.applicationName) Studio",
            "Search \(.applicationName)"
        ], shortTitle: "Search", systemImageName: "magnifyingglass")
    }

    let shortcutTileColor: ShortcutTileColor = .blue
}
The iOS 18-only AppShortcut shows the following error, but none of the suggested fixes seem to work. Maybe I am going about it the wrong way.
'SearchSnippetIntent' is only available in iOS 18 or newer
Add 'if #available' version check
Add @available attribute to enclosing static property
Add @available attribute to enclosing struct
Thanks in advance for your help.
The documentation for translationTask(source:target:action:) says it should translate when content appears, but this isn't happening. I'm only able to translate when I manually associate the task with a configuration and instantiate that configuration.
Here’s the complete source code:
import SwiftUI
import Translation

struct ContentView: View {
    @State private var originalText = "The orange fox jumps over the lazy dog"
    @State private var translationTaskResult = ""
    @State private var translationTaskResult2 = ""
    @State private var configuration: TranslationSession.Configuration?

    var body: some View {
        List {
            // THIS DOES NOT WORK
            Section {
                Text(translationTaskResult)
                    .translationTask { session in
                        Task { @MainActor in
                            do {
                                let response = try await session.translate(originalText)
                                translationTaskResult = response.targetText
                            } catch { print(error) }
                        }
                    }
            }
            // THIS WORKS
            Section {
                Text(translationTaskResult2)
                    .translationTask(configuration) { session in
                        Task { @MainActor in
                            do {
                                let response = try await session.translate(originalText)
                                translationTaskResult2 = response.targetText
                            } catch { print(error) }
                        }
                    }
                Button(action: {
                    if configuration == nil {
                        configuration = TranslationSession.Configuration()
                        return
                    }
                    configuration?.invalidate()
                }) { Text("Translate") }
            }
        }
    }
}
How can I automatically translate a given text when it appears using the new translationTask API?
Adding the openAppWhenRun property to an AppIntent for a ControlWidgetButton causes the following error when the control is tapped in Control Center:
Unknown NSError The operation couldn’t be completed. (LNActionExecutorErrorDomain error 2018.)
Here’s the full ControlWidget and AppIntent code that causes the error:
Should controls be able to open apps after the AppIntent runs, or is this a bug?
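For illustration, a hypothetical minimal control and intent of this shape (the intent name, kind string, and label below are placeholders, not the actual code referenced above):

import AppIntents
import SwiftUI
import WidgetKit

struct OpenAppIntent: AppIntent {
    static let title: LocalizedStringResource = "Open App"
    static let openAppWhenRun: Bool = true  // removing this line avoids the error

    func perform() async throws -> some IntentResult {
        .result()
    }
}

struct OpenAppControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.open-app-control") {
            ControlWidgetButton(action: OpenAppIntent()) {
                Label("Open App", systemImage: "arrow.up.forward.app")
            }
        }
    }
}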
Hi,
I am working on creating an EntityPropertyQuery for my app entity. I want the user to be able to use Shortcuts to search by a property in a related entity, but I'm struggling with the syntax for that.
I know the documentation for 'EntityPropertyQuery' suggests this should be possible with a different initializer for the 'QueryProperty' that takes an 'entityProvider', but I can't figure out how it works.
For example, my CJPersonAppEntity has 'emails', which is of type CJEmailAppEntity, which has a property 'emailAddress'. I want the user to be able to find the 'person' by looking up an email address.
When I try to provide this as a Property to filter by inside CJPersonAppEntityQuery, I get a syntax error:
static var properties = QueryProperties {
    Property(\CJPersonEmailAppEntity.$emailAddress, entityProvider: { person in
        person.emails // error
    }) {
        EqualToComparator { NSPredicate(format: "emailAddress == %@", $0) }
        ContainsComparator { NSPredicate(format: "emailAddress CONTAINS %@", $0) }
    }
}
The error says "Cannot convert value of type '[CJPersonEmailAppEntity]' to closure result type 'CJPersonEmailAppEntity'"
So it's not expecting an array, but an individual email item. But how do I provide that without running the predicate query that's specified in the closure?
So I tried something like this, just returning something without worrying about correctness:
Property(\CJPersonEmailAppEntity.$emailAddress, entityProvider: { person in
    person.emails.first ?? CJPersonEmailAppEntity() // satisfy the compiler
}) {
    EqualToComparator { NSPredicate(format: "emailAddress == %@", $0) }
    ContainsComparator { NSPredicate(format: "emailAddress CONTAINS %@", $0) }
}
and it built the app, but failed at a later step, 'Extracting app intents metadata':
error: Entity CJPersonAppEntity does not contain a property named emailAddress. Ensure that the property is wrapped with an @Property property wrapper
So I'm not sure what the correct syntax for handling this case is, and I can't find any other examples of how it's done. I would love some feedback on this.
Hi there, I'm a Computer Science student with a 2019 MacBook Pro, and I'm thinking of buying a new Mac, either a Mac Studio or a MacBook Pro, to use for ML.
I'm currently working on a segmentation model and wondering whether I could use Core ML or the Apple Neural Engine in the new M3 chips to train it. I'm using Colab and TensorFlow to create the model at the moment, but it's not doing the job; I keep running out of CUDA memory.
Thanks :)
Hi, all.
I've been writing various computational functions using Metal.
However, in the following kernel, unlike + and *, there is an accuracy issue with the / operation.
This is a function that divides a matrix of shape [n, x, y] by a scalar [1].
When compared to NumPy or torch, if I change the operator in this function to * or + instead of /, I get exactly the same results, but in the case of /, there is a difference in the mean of more than 1e-5.
(For reference, this was written with reference to the metal kernel code in llama.cpp)
kernel void kernel_div_single_f16(
        device const half * src0,
        device const half * src1,
        device       half * dst,
        constant int64_t & ne00,
        constant int64_t & ne01,
        constant int64_t & ne02,
        constant int64_t & ne03,
        uint3 tgpig[[threadgroup_position_in_grid]],
        uint3 tpitg[[thread_position_in_threadgroup]],
        uint3 ntg[[threads_per_threadgroup]]) {
    const int64_t i03 = tgpig.z;
    const int64_t i02 = tgpig.y;
    const int64_t i01 = tgpig.x;
    const uint offset = i03*ne02*ne01*ne00 + i02*ne01*ne00 + i01*ne00;
    for (int i0 = tpitg.x; i0 < ne00; i0 += ntg.x) {
        dst[offset + i0] = src0[offset + i0] / *src1;
    }
}
My machine is a MacBook Pro (16-inch, 2021) / macOS 12.5 / Apple M1 Pro.
Are there any known issues related to division? Thanks in advance for your reply.
Controls' actions use App Intents in iOS 18.
However, when executing App Intents from Controls, alert dialogs such as IntentDialog can't be used.
This makes it unclear how to display errors when trying to create Controls that do not launch the app.
Is there any way to handle this?
"Last year, I upgraded to an M2 Max laptop, expecting that tensorflow-metal would facilitate effective local prototyping utilizing the Apple Silicon's capabilities.
It has been quite some time since tensorflow-metal was last updated, and there appear to be several unresolved issues noted by the community here. I've personally observed the following behavior with my setup:
Without tensorflow-metal:
import tensorflow as tf

for _ in range(10):
    print(tf.random.normal((3,)).numpy())
[-1.4213976 0.08230731 -1.1260201 ]
[ 1.2913705 -0.47693467 -1.2886043 ]
[ 0.09144169 -1.0892165 0.9313669 ]
[ 1.1081179 0.9865657 -1.0298151]
[ 0.03328908 -0.00655857 -0.02662632]
[-1.002391 -1.1873596 -1.1168724]
[-1.2135247 -1.2823236 -1.0396363]
[-0.03492929 -0.9228362 0.19147137]
[-0.59353966 0.502279 0.80000925]
[-0.82247525 -0.13076428 0.99579334]
With tensorflow-metal:
import tensorflow as tf

for _ in range(10):
    print(tf.random.normal((3,)).numpy())
[ 1.0031303 0.8095635 -0.0610961]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
Given these observations, it seems there may be an issue with the randomness of tf.random.normal when using tensorflow-metal.
My current setup includes macOS 14.5, tensorflow 2.14.1, and tensorflow-macos 2.14.1. I am interested in understanding whether there are known solutions or workarounds for this behavior.
Furthermore, could anyone provide an update on whether tensorflow-metal is still being actively developed, or if alternative approaches are recommended for utilizing the GPU capabilities of this hardware?
My application supports both iOS and iPadOS. I would like to add App Intents only on iOS and not on iPadOS, but I haven't found any documentation on this. Is it possible? If so, how can I do it?
If not, what is the best practice for avoiding showing Siri Shortcuts on iPad?