Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under the Machine Learning tag

53 Posts

Apple AI / Data Protection & Processing
Where does the processing power for certain AI capabilities come from? Is it hosted on the originating device, or does the device send the contents of the originating information to Apple servers, which process it and return the result to the end user? For example, if I ask AI to summarize an email, will the contents of that email be sent to an Apple AI service that processes it and returns the summary to the originating device?
0 replies · 0 boosts · 560 views · Sep ’24

TensorFlow error on MacBook Air M1 (macOS Monterey)
I keep getting this error again and again, even after reinstalling:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/__init__.py", line 439, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: _OBJC_CLASS_$_MPSGraphRandomOpDescriptor
  Referenced from: /Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: /System/Library/Frameworks/MetalPerformanceShadersGraph.framework/Versions/A/MetalPerformanceShadersGraph
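
The missing symbol is an Objective-C class that libmetal_plugin.dylib expects to find in the system MetalPerformanceShadersGraph framework, so a tensorflow-metal build newer than what this Monterey install provides is a plausible cause. A first diagnostic step, sketched below assuming the usual tensorflow-macos/tensorflow-metal pairing (the package names are assumptions, not taken from the post), is to list the installed versions and compare them against Apple's tensorflow-metal compatibility table:

# Sketch: print installed TensorFlow-related package versions so they can be
# checked against the tensorflow-metal compatibility table for this macOS.
import importlib.metadata as metadata

for pkg in ("tensorflow", "tensorflow-macos", "tensorflow-metal"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")

If the plugin turns out to postdate the OS, pinning an older tensorflow-metal release (or updating macOS) is the usual way to realign the two.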
1 reply · 0 boosts · 942 views · Sep ’24

Ideas to improve Apple Watch
Dear Apple Team,

I have a suggestion to enhance the Apple Watch user experience. A new feature could provide personalized recommendations based on weather conditions and the user's mood. For example, during hot weather it could suggest drinking something cold, or if the user is feeling down, it could offer ways to boost their mood. This kind of feature could make the Apple Watch not just a health and fitness tracker but also a more capable personal assistant.

"Improve communication with Apple Watch"
Feature #1: Noise detection and location suggestions. Imagine having your Apple Watch detect ambient noise levels and suggest a quieter location for your call.
Feature #2: Context-aware call response options. If you can't answer a call, your Apple Watch could offer pre-set responses to communicate your status and reduce missed-call anxiety. For example, if you're in a busy restaurant, your Apple Watch could suggest moving to a quieter spot nearby for a better conversation. Or if you're in a movie theater, your Apple Watch could send an automatic "I'm at the movies" text to the caller.

"Improve user experience and app management"
Automated sleep notifications: The ability for the Apple Watch to automatically turn off notifications or change the watch face when the user is sleeping would provide a more seamless experience. For instance, when the watch detects that the user is in sleep mode, it could enable Do Not Disturb to silence calls and alerts.
Caller notification: In addition, it would be great if the Apple Watch could inform callers that the user is currently sleeping. This could help manage expectations for those attempting to reach the user at night.
App management to conserve battery: Implementing a feature that detects battery-draining apps while the user is asleep could further enhance battery life. The watch and the iPhone could close or pause apps that are using significant power while the user is not active.

I believe these features could provide valuable advancements in enhancing the Apple Watch's usability for those who prioritize a restful night's sleep. Thank you for considering my suggestions.

Best regards,
Mahmut Ötgen
Istanbul, Turkey
1 reply · 0 boosts · 1.3k views · Aug ’24

CoreML crash on iOS 18 Beta 5
Hello, my app works well on iOS 17 and previous iOS 18 betas, but it crashes on the latest iOS 18 Beta 5 when calling the model's predictionFromFeatures. The crash call stack is:

*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Unrecognized ANE execution priority MLANEExecutionPriority_Unspecified'
Last Exception Backtrace:
0   CoreFoundation   0x000000019bd6408c __exceptionPreprocess + 164
1   libobjc.A.dylib  0x000000019906b2e4 objc_exception_throw + 88
2   CoreFoundation   0x000000019be5f648 -[NSException initWithCoder:]
3   CoreML           0x00000001b7507340 -[MLE5ExecutionStream _setANEExecutionPriorityWithOptions:] + 248
4   CoreML           0x00000001b7508374 -[MLE5ExecutionStream _prepareForInputFeatures:options:error:] + 248
5   CoreML           0x00000001b7507ddc -[MLE5ExecutionStream executeForInputFeatures:options:error:] + 68
6   CoreML           0x00000001b74ce5c4 -[MLE5Engine _predictionFromFeatures:stream:options:error:] + 80
7   CoreML           0x00000001b74ce7fc -[MLE5Engine _predictionFromFeatures:options:error:] + 208
8   CoreML           0x00000001b74cf110 -[MLE5Engine _predictionFromFeatures:usingState:options:error:] + 400
9   CoreML           0x00000001b74cf270 -[MLE5Engine predictionFromFeatures:options:error:] + 96
10  CoreML           0x00000001b74ab264 -[MLDelegateModel _predictionFromFeatures:usingState:options:error:] + 684
11  CoreML           0x00000001b70991bc -[MLDelegateModel predictionFromFeatures:options:error:] + 124

My model is an .mlpackage file. The source code is as below:

// model
MLModel *_model;
......
// model init: restrict compute to CPU and the Neural Engine
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUAndNeuralEngine;
_model = [MLModel modelWithContentsOfURL:compileUrl configuration:config error:&error];
.....
// model prediction, using default prediction options
MLPredictionOptions *option = [[MLPredictionOptions alloc] init];
id<MLFeatureProvider> outFeatures = [_model predictionFromFeatures:_modelInput options:option error:&error];

Is there anything wrong? Any advice would be appreciated.
3 replies · 1 boost · 874 views · Aug ’24

ChatGPT audio in background
Dear Apple Development Team,

I'm writing to express my concerns and request a feature enhancement regarding the ChatGPT app for iOS. Currently, the app's audio functionality does not work when the app is in the background. This limitation significantly affects the user experience, particularly for those of us who rely on the app for ongoing, interactive voice conversations.

Given that many apps, particularly media and streaming services, are allowed to continue audio playback when minimized, it's frustrating that the ChatGPT app cannot do the same. This restriction interrupts the flow of conversation, forcing users to stay within the app to maintain an audio connection.

For users who multitask on their iPhones, being able to switch between apps while continuing to listen or interact with ChatGPT is essential. The ability to reference notes, browse the web, or even respond to messages while maintaining an ongoing conversation with ChatGPT would greatly enhance the app's usability and align it with other background-capable apps.

I understand that Apple prioritizes resource management and device performance, but I believe there's a strong case for allowing apps like ChatGPT to operate with background audio. Given its growing importance as a tool for productivity, learning, and communication, adding this capability would provide significant value to users.

I hope you will consider this feedback for future updates to iOS, or provide guidance on any existing APIs that could be leveraged to enable such functionality. Thank you for your time and consideration.

Best regards,
Luke

(Yes, I used GPT to write this.)
1 reply · 0 boosts · 1.1k views · Aug ’24

xcrun: error: cannot be used within an App Sandbox.
Helpppp. I installed Krita from the App Store, and it works. Then I installed ai_diffusion and I got: xcrun: error: cannot be used within an App Sandbox. Can anybody help me? Thanks.

AttributeError
Python 3.10.7: /Applications/krita.app/Contents/MacOS/krita
Sat Aug 3 18:15:59 2024

A problem occurred in a Python script. The failing code in ai_diffusion/ui/region.py, inside _update_language:

381   enabled = self._root._model.translation_enabled
382   lang = settings.prompt_translation if enabled else "en"
383   self._language_button.setText(lang.upper())

Here lang is None, so lang.upper is undefined.

Traceback (most recent call last):
  File "/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py", line 347, in update_settings
    self._update_language()
  File "/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py", line 383, in _update_language
    self._language_button.setText(lang.upper())
AttributeError: 'NoneType' object has no attribute 'upper'
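
Two separate things are visible here: the xcrun message (the sandboxed App Store build of Krita is not allowed to invoke the developer tools) and an unrelated AttributeError inside the ai_diffusion plugin, where settings.prompt_translation is None while the UI expects a language code. A hedged sketch of the kind of guard that would avoid the crash, written as a standalone function rather than a tested patch to region.py (the "en" fallback is an assumption about the plugin's intended default):

# Standalone rewrite of the label logic from _update_language, with a None guard.
def language_label(prompt_translation, translation_enabled):
    # Fall back to "en" when translation is disabled or no language is set.
    lang = prompt_translation if translation_enabled else "en"
    return (lang or "en").upper()

print(language_label(None, True))   # EN (instead of raising AttributeError)
print(language_label("de", True))   # DE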
1 reply · 0 boosts · 786 views · Aug ’24

CreateML Hand Pose Classifier preview not showing the prediction result
I have created and trained a Hand Pose classifier model and am trying to test it. I noticed that in the WWDC 2021 session "Classify hand poses and actions with Create ML", the preview window has a prediction result that shows the prediction based on the live preview or imported images. Mine does not. When I try to import pictures or run the live test, there is no result; it's just the wireframe view, with nothing underneath it. How do I fix this, please? Thanks.
1 reply · 0 boosts · 726 views · Jul ’24

Use iPad M1 processor as GPU
Hello, I'm currently working on TinyML / ML on the edge using the Google Colab platform. Having exhausted my free compute units, I'm being prompted to pay. I've been considering leveraging the GPU capabilities of my M1 iPad and my Intel-based Mac. Both devices have Thunderbolt ports capable of sharing connections at up to 30 GB/s. Since I'm primarily using a classification model, extensive GPU usage isn't necessary. I'm looking for assistance or guidance on using the iPad's processor as an eGPU for my Mac, possibly through an API or an Apple technology. Any help would be greatly appreciated!
2 replies · 0 boosts · 1.1k views · Sep ’24

PyTorch to CoreML Model inaccuracy
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format, trained it on a custom dataset, and then converted it to Core ML using coremltools version 6.2. However, upon running the converted model on iOS, I observed a significant drop in accuracy. This video (https://youtu.be/EfGFrOZQGtU) demonstrates the outputs of the PyTorch model (on the left) and the Core ML model (on the right). Could you please confirm whether this drop in accuracy is expected, and suggest possible solutions to address it? Note that all preprocessing and post-processing steps are identical between the two models.

P.S. While converting, I also got the following warning:

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):

P.P.S. When we initialize the Core ML model on iOS 17.0, we get this error:

Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values, and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
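
The TracerWarning is a common source of exactly this symptom: a data-dependent Python branch is frozen at trace time, so the traced graph can silently diverge for inputs that would have taken the other branch. Another useful step before suspecting pre/post-processing is to diff the traced PyTorch model against its Core ML conversion on an identical input. A minimal sketch with a toy model (the model, shapes, tensor names, and FP32 override are illustrative assumptions, not the poster's setup; the prediction step requires macOS):

# Sketch: numerically compare a traced PyTorch model with its Core ML conversion.
import numpy as np
import torch
import torch.nn as nn
import coremltools as ct

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval()
x = torch.rand(1, 3, 64, 64)
with torch.no_grad():
    ref = model(x).numpy()                      # reference output from PyTorch

traced = torch.jit.trace(model, x)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=x.shape)],
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT32,     # mlprogram defaults to FP16
)
out = list(mlmodel.predict({"input": x.numpy()}).values())[0]
print("max abs diff:", float(np.abs(ref - out).max()))

If the FP32 conversion matches but the FP16 default does not, the drop is precision-related rather than an operator bug; if neither matches, the tracing warning deserves a closer look.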
2 replies · 0 boosts · 2.0k views · Jul ’24

Loading CoreML model increases app size?
Hi, I have been noticing some strange behavior when using Core ML models in my app. I am using the whisper.cpp implementation, which has a Core ML option; this speeds up transcription compared with Metal. However, every time I use it, the app size reported in iPhone Settings -> General -> Storage increases, specifically the "Documents & Data" part; the bundle size stays constant. The app's size seems to grow by the size of the Core ML model each time, and after a few reloads it can reach over 3-4 GB!

I thought the Core ML model (which is in the bundle) might be being saved to a file somewhere, but I can't see where. I have tried Instruments and Xcode, plus lots of printing of the cache and temp directories, deleting the caches, etc., but with no effect. I downloaded the iPhone app's container from Xcode and inspected it: there are some files stored in the cache, but only a few KB. Even though Settings -> Storage shows a few GB, the container is only a few MB.

Please can someone help or give me guidance on how to figure out why Documents & Data keeps increasing? Where could this folder be pointing to that is not in the container Xcode downloads?

This is the repo I am using: https://github.com/ggerganov/whisper.cpp. The SwiftUI app and the Objective-C app both behave the same way when using Core ML. Thanks in advance for any help; I am totally baffled by this behavior.
6 replies · 3 boosts · 1.7k views · Aug ’24

Core ML Model performance far lower on iOS 17 vs iOS 16 (iOS 17 not using Neural Engine)
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 as on iOS 16, but I'm posting it here just in case.

TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description: The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic).

iOS 16 - iPhone SE 3rd gen (A15 Bionic): iOS 16 uses the ANE, resulting in fast prediction, load, and compilation times.

iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, so the prediction, load, and compilation times are all slower.

Code to reproduce: The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)

System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS: Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Other relevant versions: PyTorch 2.0

Additional context: This happens with both "neuralnetwork" and "mlprogram" model types; neither uses the ANE on iOS 17, but both use it on iOS 16. If anyone has a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you!
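
One variable worth isolating, as a hedged experiment rather than a known fix: coremltools steers newer deployment targets toward ML Program models with FP16 precision, which is the representation the Neural Engine executes natively. A sketch of the same export with those settings follows; it reuses traced_model, image_input, and class_names from the snippet above, and whether it restores ANE scheduling on iOS 17 is exactly what an Xcode Core ML performance report would need to verify:

import coremltools as ct

# Variant of the export above: ML Program + FP16 + an explicit deployment target.
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="mlprogram",                  # instead of "neuralnetwork"
    compute_precision=ct.precision.FLOAT16,  # the precision the ANE runs
    compute_units=ct.ComputeUnit.ALL,
    minimum_deployment_target=ct.target.iOS16,
)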
1 reply · 1 boost · 1.7k views · Mar ’25