
Consumer Sleep Technology



Application US20190246936


Published 2019-08-15

System And Method For Associating Music With Brain-state Data

A system and method may be provided for associating bio-signal data (e.g., EEG brain-scan data) from at least one user with at least one music data item (e.g., a song or piece of music). By associating bio-signal data, or emotions determined from it, with music, the system may build a data store of music linked to emotions. When a particular emotion is later detected in a user's EEG data, the system may respond based at least in part on the same or a similar emotion being associated with one or more music data items in the store. For example, the system may recommend a song associated with the emotion the user is presently experiencing.
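The core idea of the abstract — a store mapping detected emotions to music items, queried when an EEG classifier reports the user's current emotion — can be sketched as a minimal data structure. All names here (`EmotionMusicStore`, the track IDs, the emotion labels) are illustrative assumptions, not taken from the application itself:

```python
from collections import defaultdict

class EmotionMusicStore:
    """Hypothetical sketch of the emotion-to-music data store described above."""

    def __init__(self):
        # emotion label -> list of music item IDs observed with that emotion
        self._by_emotion = defaultdict(list)

    def associate(self, emotion, track_id):
        """Record that a user's bio-signal data showed `emotion` while `track_id` played."""
        self._by_emotion[emotion].append(track_id)

    def recommend(self, detected_emotion):
        """Return music items previously associated with the detected emotion."""
        return list(self._by_emotion.get(detected_emotion, []))

store = EmotionMusicStore()
store.associate("calm", "song_A")
store.associate("calm", "song_B")
store.associate("excited", "song_C")

# An EEG-based emotion classifier (not shown) supplies the current emotion:
print(store.recommend("calm"))  # ['song_A', 'song_B']
```

The EEG classification step itself is out of scope here; this only illustrates the association-and-lookup pattern the abstract describes.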



Much More than Average Length Specification



USPTO Full Text Publication

1 Independent Claim

  • 1. A music system comprising:
    (a) at least one bio-signal sensor configured to capture bio-signal sensor data from at least one user;
    (b) an input receiver configured to receive music data and the bio-signal sensor data, the music data and the bio-signal sensor data being temporally defined such that the music data corresponds temporally to at least a portion of the bio-signal sensor data;
    (c) at least one processor configured to provide:
      (i) a music processor to segment the music data into a plurality of time epochs of music, each epoch of music linked to a time stamp;
      (ii) a sonic feature extractor to, for each epoch of music, extract a set of sonic features;
      (iii) a biological feature extractor to extract, for each epoch of music, a set of biological features from the bio-signal sensor data using the time stamp for the respective epoch of music;
      (iv) a metadata extractor to extract metadata from the music data;
      (v) a user feature extractor to extract a set of user attributes from the music data and the bio-signal sensor data;
      (vi) a machine learning engine to transform the set of sonic features, the set of biological features, the set of metadata, and the set of user attributes into, for each epoch of music, a set of categories that the respective epoch belongs to using one or more predictive models to predict a user reaction to music; and
    (d) a music recommendation engine configured to provide at least one music recommendation based on the sets of categories.
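The claim describes a pipeline: segment music into time-stamped epochs (c)(i), extract sonic and biological features per epoch (c)(ii)-(iii), categorize each epoch with a predictive model (c)(vi), and recommend from those categories (d). The sketch below maps those elements to toy functions; every name, feature, and threshold is a hypothetical stand-in, not the claimed implementation, and the real system would use trained models rather than the hard-coded rule shown:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Epoch:
    """One time epoch of music, linked to a time stamp — claim element (c)(i)."""
    timestamp: float
    sonic_features: dict = field(default_factory=dict)
    biological_features: dict = field(default_factory=dict)
    categories: list = field(default_factory=list)

def segment_music(music_data, epoch_seconds=5.0):
    """(c)(i): split the music into time epochs, each with a time stamp."""
    epochs, t = [], 0.0
    while t < music_data["duration"]:
        epochs.append(Epoch(timestamp=t))
        t += epoch_seconds
    return epochs

def extract_sonic_features(epoch, music_data):
    """(c)(ii): placeholder sonic-feature extraction (e.g., tempo)."""
    epoch.sonic_features = {"tempo": music_data.get("tempo", 120)}

def extract_biological_features(epoch, biosignal_by_time):
    """(c)(iii): look up bio-signal samples by the epoch's time stamp."""
    epoch.biological_features = {"eeg_mean": biosignal_by_time.get(epoch.timestamp, 0.0)}

def categorize(epoch):
    """(c)(vi): toy stand-in for the predictive models mapping features to categories."""
    tempo = epoch.sonic_features.get("tempo", 0)
    epoch.categories.append("energetic" if tempo > 100 else "relaxing")

def recommend(epochs):
    """(d): recommend based on the most common category across epochs."""
    counts = Counter(c for e in epochs for c in e.categories)
    return counts.most_common(1)[0][0] if counts else None

# Illustrative run with fabricated inputs:
music_data = {"duration": 12.0, "tempo": 130}
biosignal_by_time = {0.0: 1.2, 5.0: 0.9, 10.0: 1.1}
epochs = segment_music(music_data)
for epoch in epochs:
    extract_sonic_features(epoch, music_data)
    extract_biological_features(epoch, biosignal_by_time)
    categorize(epoch)
print(recommend(epochs))  # energetic
```

The metadata and user-attribute extractors ((c)(iv)-(v)) are omitted for brevity; they would add further feature dictionaries that the categorization step consumes alongside the sonic and biological features.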