Spotify iOS SDK: how can we get access to the PCM audio?

We're currently looking at bringing our music visualization software, which has been around for many years, to an iOS app that plays music via the new Spotify iOS SDK -- check out our visuals such as G-Force and Aeon.
Anyway, we have the demo projects in the Spotify iOS SDK all up and running and things look good, but the major step forward is getting access to the PCM audio so we can send it into our visual engines, etc.
Could a Spotify dev or someone in the know kindly suggest what options are available for getting hold of the PCM audio?
The PCM audio block can be as simple as a circular buffer holding a few thousand of the most recent samples (which we would run through an FFT, etc.).
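For concreteness, here is a rough sketch of the kind of buffer we have in mind; the names and the length are just placeholders of ours, nothing from the SDK:

```objc
#include <stddef.h>

// A few thousand of the most recent samples, overwritten as new audio arrives.
#define kVisualizerBufferLength 4096

typedef struct {
    float  samples[kVisualizerBufferLength];
    size_t writeIndex;
} VisualizerRingBuffer;

// Append newly received PCM samples, discarding the oldest ones as needed.
static void VisualizerRingBufferWrite(VisualizerRingBuffer *buffer,
                                      const float *samples,
                                      size_t count)
{
    for (size_t i = 0; i < count; i++) {
        buffer->samples[buffer->writeIndex] = samples[i];
        buffer->writeIndex = (buffer->writeIndex + 1) % kVisualizerBufferLength;
    }
}
```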
Thanks in advance!
Solution: Subclass SPTCoreAudioController and do one of two things:
Override connectOutputBus:ofNode:toInputBus:ofNode:inGraph:error: and use AudioUnitAddRenderNotify() to add a render callback to destinationNode's audio unit. The callback will be called as the output node is rendered and will give you access to the audio as it's leaving for the speakers. Once you've done that, make sure you call super's implementation so the Spotify iOS SDK's audio pipeline keeps working correctly (see the first sketch below).
Override attemptToDeliverAudioFrames:ofCount:streamDescription:. This gives you access to the PCM data as it's produced by the library. However, there's some buffering going on in the default pipeline, so the data given to this callback might be up to half a second ahead of what's actually coming out of the speakers, which is why I'd recommend suggestion 1 over this one. Call super here to continue with the default pipeline (see the second sketch below).
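Here's a rough sketch of option 1. Only the selectors come from the SDK; the subclass name, the umbrella header, and the parameter types are assumptions of mine (they follow the shape of the CocoaLibSpotify audio controller), so check SPTCoreAudioController.h in your SDK drop before relying on the exact signature:

```objc
#import <AudioToolbox/AudioToolbox.h>
#import <Spotify/Spotify.h> // umbrella header name is an assumption; adjust for your SDK version

// Render-notify callback: Core Audio invokes this around every render of the
// output node, so the post-render pass sees the buffers headed to the speakers.
static OSStatus VisualizerRenderNotify(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // ioData->mBuffers[] now holds rendered PCM; copy what you need into
        // your circular buffer here, keeping this callback fast and lock-free.
    }
    return noErr;
}

@interface VisualizerAudioController : SPTCoreAudioController // hypothetical subclass
@end

@implementation VisualizerAudioController

// Parameter types below are assumptions based on the selector quoted above.
- (BOOL)connectOutputBus:(UInt32)sourceOutputBusNumber
                  ofNode:(AUNode)sourceNode
              toInputBus:(UInt32)destinationInputBusNumber
                  ofNode:(AUNode)destinationNode
                 inGraph:(AUGraph)graph
                   error:(NSError **)error
{
    // Look up the AudioUnit behind the destination (output) node...
    AudioUnit destinationUnit = NULL;
    if (AUGraphNodeInfo(graph, destinationNode, NULL, &destinationUnit) == noErr &&
        destinationUnit != NULL) {
        // ...and attach a render notification so the callback above sees every buffer.
        AudioUnitAddRenderNotify(destinationUnit, VisualizerRenderNotify, (__bridge void *)self);
    }

    // Let the SDK finish wiring up its own audio pipeline.
    return [super connectOutputBus:sourceOutputBusNumber
                            ofNode:sourceNode
                        toInputBus:destinationInputBusNumber
                            ofNode:destinationNode
                           inGraph:graph
                             error:error];
}

@end
```

And a rough sketch of option 2, which would replace the override above in the same kind of subclass. Again, the exact parameter and return types are assumptions; check the header in your SDK version:

```objc
// Option 2 sketch: intercept the PCM as the library delivers it.
- (NSUInteger)attemptToDeliverAudioFrames:(const void *)audioFrames
                                  ofCount:(NSInteger)frameCount
                        streamDescription:(AudioStreamBasicDescription)audioDescription
{
    // audioDescription carries the sample format, channel count and sample rate.
    // Copy the frames into your circular buffer for the FFT, e.g.:
    //   memcpy(ringBuffer, audioFrames, frameCount * audioDescription.mBytesPerFrame);

    // Hand the frames on to the default pipeline so playback continues.
    return [super attemptToDeliverAudioFrames:audioFrames
                                      ofCount:frameCount
                            streamDescription:audioDescription];
}
```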
Once you have your custom audio controller, initialise an SPTAudioStreamingController with it and you should be good to go.
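Wiring that up might look something like the following. The initialiser selector here is an assumption (it has varied between SDK betas), so use whichever SPTAudioStreamingController initialiser in your SDK version accepts an audio controller:

```objc
// Hypothetical wiring -- selector names may differ between SDK releases.
VisualizerAudioController *audioController = [[VisualizerAudioController alloc] init];

SPTAudioStreamingController *streamingController =
    [[SPTAudioStreamingController alloc] initWithClientId:@"your-client-id"
                                          audioController:audioController];

// Log in and start playback exactly as in the SDK demo projects; the render
// notification (or the frame-delivery override) starts receiving PCM as soon
// as audio is playing.
```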
I actually used suggestion 1 to implement iTunes' visualiser API in my Mac OS X Spotify client that was built with CocoaLibSpotify. It's not working 100% smoothly (I think I'm doing something wrong with runloops and stuff), but it drives G-Force and Whitecap pretty well. You can find the project , and the visualiser stuff is in . The audio controller class in CocoaLibSpotify and that project is essentially the same as the one in the new iOS SDK.