GroupFour

Group Name: GroupFour
Members: Amit Prakash, Kyewook Lee

MicroAPI

Conversion API -> Distance data to Sound data

  • Brief idea
    • We receive serialized numeric data, the same as the distance data, and convert it to sound data. The distance data is scaled and then refined into meaningful sound data. The output can be either a file or live data. The distance data can be passed through the Sound API to generate the sound.

Sound Alert System for Robots

Concept

The Sick laser produces real-time distance data. This distance data is converted into sound data, and from the sound data a suitable sound alert is generated.



Sound Micro API

bool GetInputStream() -- Get data from standard input.
bool GetInputStream(FILE infile) -- Get data from a file.
bool GetInputStream(int InputArr[]) -- Get data from the Sick laser.

bool initDtoS() -- Set defaults: Degree → 180, Resol → 1, unit → m, effective range → 5 m.

bool InitDtoS(int Degree, int Resol, int unit, int eff_range) -- Degree is 100 or 180 (default 180). Resolution in degrees is 1, 0.5, or 0.25 (default 1). Unit is m, cm, or mm (default m). eff_range is the limit of the maximum distance range.

bool initSEvent() -- Set defaults: delay → 30 ms, Delimiter → FALSE.

bool initSEvent(int delay, bool delim) -- delay is the pause between each sound (in milliseconds, per the 30 ms default above). Delimiter: if it's on, play a violin sound in between the distance sounds.

ConvertDtoS() -- Using the input stream, map and refine the distance data. The output is sound data represented as an integer array. The mapping ratio is 1:10.

SoundPlayEvent() -- Play piano notes from the output of ConvertDtoS.

API's - PLAYER SERVER

The selected APIs are compatible: for the system to generate sound it must have sound data, and the data gathered from the Sick laser must be converted into sound data. For this conversion we need an API that converts distance data into sound data.

API's - SICK LASER

The Sick laser range finder is based on time-of-flight measurements: it scans by sending out laser pulses and measuring the distance. The range data covers a field of 0~180 degrees or 40~140 degrees. The angular resolution can be configured to 1, 0.5, or 0.25 degrees depending on the range. The measuring range can be 8 to 80 meters.

API's -JAVA Sound API

The Java Sound API provides support for the capture, processing, and rendering of audio and MIDI data. The engine can begin playback as soon as it begins to receive sample data or MIDI commands, so it can be used with streaming data systems.

DATA CONVERSION Mechanism

With the SICK laser we have the option of getting a maximum of 720 data points. The data also have an effective range: if a distance is infinite we cannot map it to a sound, so we define a limit, and anything over the limit is mapped to the highest number we have. A low-pitched sound means near; a high-pitched sound means far. The sound range is between 0 and 200, so we rescale the input data to the sound data; the returned data are 1/10 of the input data.
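The conversion described above can be sketched as follows. The class and method names here are hypothetical; only the 1:10 averaging, the effective-range clamp, and the 0-200 sound range come from the text.

```java
/* Hypothetical sketch of the data conversion mechanism: average each group
   of 10 distance readings, clamp out-of-range values to the effective
   range, and rescale into the 0..200 sound range. */
public class DistanceToSound {
    static final int MAX_PITCH = 200;     // sound range is 0..200
    static final double EFF_RANGE = 5.0;  // effective range limit (m), per initDtoS default

    /* Map distance readings (meters) to pitch values, 1/10 as many outputs. */
    public static int[] refine(double[] dist) {
        int[] pitch = new int[dist.length / 10];
        for (int j = 0; j < pitch.length; j++) {
            double sum = 0;
            for (int i = 0; i < 10; i++) {
                // Clamp "infinite" or out-of-range readings to the limit.
                sum += Math.min(dist[j * 10 + i], EFF_RANGE);
            }
            double avg = sum / 10.0;
            // Near -> low pitch, far -> high pitch, rescaled into 0..200.
            pitch[j] = (int) (avg / EFF_RANGE * MAX_PITCH);
        }
        return pitch;
    }

    public static void main(String[] args) {
        double[] dist = new double[180];
        java.util.Arrays.fill(dist, 2.5);  // every reading at half range
        int[] pitch = refine(dist);
        System.out.println(pitch.length + " " + pitch[0]); // prints: 18 100
    }
}
```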


  • Demo for Sound Alert System for Robots Using Sound API

We tried to create a sound file from the Sick laser distance data file.

Conversion mechanism:

  • Considered the two formats of the data gathered by the sick laser, i.e. distance data in meters and direction data in degrees.
  • Based on circumstances the data will vary, and so the data in the sound file will change.
  • Corresponding to the two formats, we play the piano and the violin sounds.

Code:

    /* Changing the distance data to the sound data */
    private void DistRefine() {
        float f = 0;
        int j = 0;

        for (int i = 0; i < 180; i++) {
            System.out.printf("[f %f ]", fallDistData[i]);
            if (i != 0 && (i % 10) == 0) {
                // Average the previous 10 readings and rescale by THRESHOLD.
                playSound[j] = (int) ((f / 10) * THRESHOLD);
                System.out.println(playSound[j]);
                System.out.println(i);
                f = 0;
                j++;
            }
            f += fallDistData[i];
        }
        // Flush the final group (i = 170..179), which the loop above misses.
        playSound[j] = (int) ((f / 10) * THRESHOLD);
    }

    /* Play the sound using the JavaSound API */
    public void createShortEvent(int type, int num) {
        ShortMessage myMsg = new ShortMessage();
        // Start playing the note Middle C (60),
        // moderately loud (velocity = 93).
        try {
            myMsg.setMessage(ShortMessage.NOTE_ON, 0, num, 93);
            long timeStamp = -1;
            Receiver	 rcvr = MidiSystem.getReceiver();
            rcvr.send(myMsg, timeStamp);
        } catch (Exception ex) { ex.printStackTrace(); }

        /*
        ShortMessage message = new ShortMessage();
        try {
            long millis = System.currentTimeMillis() - startTime;
            long tick = millis * sequence.getResolution() / 500;
            message.setMessage(type+cc.num, num, cc.velocity);
            MidiEvent event = new MidiEvent(message, tick);
            track.add(event);
        } catch (Exception ex) { ex.printStackTrace(); }
         */
    }

API's - Player Server, Sound API and API for converting the distance data into sound data.

The selected APIs are compatible: for the system to generate sound it must have sound data, and the data gathered from the Sick laser must be converted into sound data. So, for this conversion we need an API that converts distance data into sound data.


  • Using player server and client API, acquire data from the sick laser.
  • Sick Laser.
    • The Sick laser range finder is based on time-of-flight measurements: it scans by sending out laser pulses and measuring the distance. The range data covers a field of 0~180 degrees or 40~140 degrees. The angular resolution can be configured to 1, 0.5, or 0.25 degrees depending on the range. The measuring range can be 8 to 80 meters.
  • JAVA Sound API
    • The Java Sound API provides support for the capture, processing, and rendering of audio and MIDI data.
    • The engine can begin playback as soon as it begins to receive sample data or MIDI commands, so it can be used with streaming data systems.
    • The Java Sound API consists of four packages:
      • javax.sound.sampled
      • javax.sound.midi
      • javax.sound.sampled.spi
      • javax.sound.midi.spi

The Java Sound API engine supports the Audio Interchange File Format (AIFF), Sun Audio (AU), Wave (WAV), Musical Instrument Digital Interface (MIDI, type 0 and 1), and Rich Music Format (RMF) file formats. It also supports any sound data source that can be expressed as a data stream of sampled data in 8- or 16-bit chunks, mono or stereo, at sample rates from 8 to 48 kHz. A new feature is the support of A-law and u-law compressed data formats. The MIDI synthesizer supports wavetable synthesis, which programmers can access by loading the programmable sound bank. The software mixer of the Java Sound API can mix up to 64 channels of sampled or synthesized audio.


Interconnection API's


The plan for the next step is to convert the distance data into sound data.


The problem seems to be in converting the distance data into sound data.



Reading Sound Files

The AudioSystem class provides two types of file-reading services:

  • Information about the format of the audio data stored in the sound file
  • A stream of formatted audio data that can be read from the sound file

The first of these is given by three variants of the getAudioFileFormat method:

static AudioFileFormat getAudioFileFormat(java.io.File file)
static AudioFileFormat getAudioFileFormat(java.io.InputStream stream)
static AudioFileFormat getAudioFileFormat(java.net.URL url)

As mentioned above, the returned AudioFileFormat object tells you the file type, the length of the data in the file, encoding, the byte order, the number of channels, the sampling rate, and the number of bits per sample.
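As a quick sketch of this first service, the following builds a one-second WAV in memory and then queries its file format. The class name FormatInfoDemo is invented; the AudioSystem and AudioFileFormat calls are the ones described above.

```java
import javax.sound.sampled.*;
import java.io.*;

/* Build a tiny WAV in memory, then ask AudioSystem for its file format. */
public class FormatInfoDemo {
    public static void main(String[] args) throws Exception {
        // 16-bit signed, little-endian, mono PCM at 8 kHz.
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        byte[] silence = new byte[8000 * 2];   // one second of silence
        AudioInputStream in = new AudioInputStream(
            new ByteArrayInputStream(silence), fmt, 8000);
        ByteArrayOutputStream wav = new ByteArrayOutputStream();
        AudioSystem.write(in, AudioFileFormat.Type.WAVE, wav);

        // getAudioFileFormat reports type, channels, sample rate, etc.
        AudioFileFormat info = AudioSystem.getAudioFileFormat(
            new ByteArrayInputStream(wav.toByteArray()));
        System.out.println(info.getType());                         // WAVE
        System.out.println(info.getFormat().getChannels());         // 1
        System.out.println((int) info.getFormat().getSampleRate()); // 8000
    }
}
```

Note that the InputStream variant requires a stream that supports mark/reset, which ByteArrayInputStream does.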

The second type of file-reading functionality is given by these AudioSystem methods:

static AudioInputStream getAudioInputStream(java.io.File file)
static AudioInputStream getAudioInputStream(java.net.URL url)
static AudioInputStream getAudioInputStream(java.io.InputStream stream)

These methods give you an object (an AudioInputStream) that lets you read the file's audio data, using one of the read methods of AudioInputStream. We'll see an example momentarily.

Suppose you're writing a sound-editing application that allows the user to load sound data from a file, display a corresponding waveform or spectrogram, edit the sound, play back the edited data, and save the result in a new file. Or perhaps your program will read the data stored in a file, apply some kind of signal processing (such as an algorithm that slows the sound down without changing its pitch), and then play the processed audio. In either case, you need to get access to the data contained in the audio file. Assuming that your program provides some means for the user to select or specify an input sound file, reading that file's audio data involves three steps:

  • Get an AudioInputStream object from the file.
  • Create a byte array in which you'll store successive chunks of data from the file.
  • Repeatedly read bytes from the audio input stream into the array. On each iteration, do something useful with the bytes in the array (for example, you might play them, filter them, analyze them, display them, or write them to another file).

The following code example outlines these steps.

int totalFramesRead = 0;
// somePathName is a pre-existing string whose value was
// based on a user selection.
File fileIn = new File(somePathName);
try {

  AudioInputStream audioInputStream = 
    AudioSystem.getAudioInputStream(fileIn);
  int bytesPerFrame = 
    audioInputStream.getFormat().getFrameSize();
  // Set an arbitrary buffer size of 1024 frames.
  int numBytes = 1024 * bytesPerFrame; 
  byte[] audioBytes = new byte[numBytes];
  try {
    int numBytesRead = 0;
    int numFramesRead = 0;
    // Try to read numBytes bytes from the file.
    while ((numBytesRead = 
      audioInputStream.read(audioBytes)) != -1) {
      // Calculate the number of frames actually read.
      numFramesRead = numBytesRead / bytesPerFrame;
      totalFramesRead += numFramesRead;
      // Here, do something useful with the audio data that's 
      // now in the audioBytes array...
    }
  } catch (Exception ex) { 
    // Handle the error...
  }

} catch (Exception e) {
  // Handle the error...
}


Writing Sound Files

The following AudioSystem method creates a disk file of a specified file type. The file will contain the audio data that's in the specified AudioInputStream:

static int write(AudioInputStream in,
    AudioFileFormat.Type fileType, File out)

Note that the second argument must be one of the file types supported by the system (for example, AU, AIFF, or WAV), otherwise the write method will throw an IllegalArgumentException. To avoid this, you can test whether a particular AudioInputStream may be written to a particular type of file by invoking this AudioSystem method:

static boolean isFileTypeSupported
    (AudioFileFormat.Type fileType, AudioInputStream stream)

which will return true only if the particular combination is supported.

More generally, you can learn what types of file the system can write by invoking one of these AudioSystem methods:

static AudioFileFormat.Type[] getAudioFileTypes()
static AudioFileFormat.Type[]
    getAudioFileTypes(AudioInputStream stream)

The first of these returns all the types of file that the system can write, and the second returns only those that the system can write from the given audio input stream.
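A minimal check of the first of these methods (the class name FileTypesDemo is invented; the AudioSystem calls are as documented above):

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioSystem;

/* List every file type the system can write. */
public class FileTypesDemo {
    public static void main(String[] args) {
        for (AudioFileFormat.Type t : AudioSystem.getAudioFileTypes()) {
            // Typically includes WAVE, AU, and AIFF on a standard JRE.
            System.out.println(t + " (." + t.getExtension() + ")");
        }
    }
}
```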

The following excerpt demonstrates one technique for creating an output file from an AudioInputStream using the write method mentioned above.

File fileOut = new File(someNewPathName);
AudioFileFormat.Type fileType = fileFormat.getType();
if (AudioSystem.isFileTypeSupported(fileType,
    audioInputStream)) {
  AudioSystem.write(audioInputStream, fileType, fileOut);
}

The first statement above creates a new File object, fileOut, with a user- or program-specified pathname. The second statement gets a file type from a pre-existing AudioFileFormat object called fileFormat, which might have been obtained from another sound file, such as the one read in Reading Sound Files above. (Alternatively, you could supply whatever supported file type you want instead of getting the file type from elsewhere. For example, you might delete the second statement and replace the other two occurrences of fileType in the code above with AudioFileFormat.Type.WAVE.)

The third statement tests whether a file of the designated type can be written from a desired AudioInputStream. Like the file format, this stream might have been derived from the sound file previously read. (If so, presumably you've processed or altered its data in some way, because otherwise there are easier ways to simply copy a file.) Or perhaps the stream contains bytes that have been freshly captured from the microphone input.

Finally, the stream, file type, and output file are passed to the AudioSystem.write method, to accomplish the goal of writing the file.



Previous Idea

Human Face Scanner Using Sick Laser.

Concept

  • Human face recognition is a very difficult problem.
  • Using the sick laser, we can acquire distance data from a human face and refine it into meaningful data.
  • Comparison and match.

API's -- Player and Open GL

  • Build a physical scanning system.
    • Slide the sick laser to acquire 2-D data.
  • Using the player server and client API, acquire data from the sick laser.
  • Build a human face database.
    • The acquired data will be an N * N array.
    • The database should be fast -- we have not yet decided.
      • A linked list or MySQL will be used.
  • Using OpenGL, make a window and do the analysis.
    • Display the 2D distance data converted to 2D gray-scale data.
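The gray-scale display step might be sketched like this; the class and method names are hypothetical, and only the idea of rescaling 2D distance data into gray-scale values comes from the list above (the OpenGL windowing itself is omitted).

```java
/* Hypothetical sketch: rescale an N x N array of distance readings
   into 0..255 gray-scale values for display. */
public class DistanceToGray {
    public static int[][] toGray(double[][] dist, double maxRange) {
        int[][] gray = new int[dist.length][];
        for (int r = 0; r < dist.length; r++) {
            gray[r] = new int[dist[r].length];
            for (int c = 0; c < dist[r].length; c++) {
                double d = Math.min(dist[r][c], maxRange); // clamp far readings
                gray[r][c] = (int) (d / maxRange * 255);   // near = dark, far = light
            }
        }
        return gray;
    }

    public static void main(String[] args) {
        double[][] face = {{0.0, 0.4}, {0.8, 2.0}};
        int[][] g = toGray(face, 0.8);
        System.out.println(g[0][1] + " " + g[1][0]); // prints: 127 255
    }
}
```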

Interconnection API's

"Main"

Demonstration Code

  • Player client API's

/* player client create */

	client = playerc_client_create(NULL, "localhost", 6665);

	if (playerc_client_connect(client) != 0)
	{
		printf("error: %s\n", playerc_error_str());
		exit(0);
	}
	// Change the server's data delivery mode.
	if (playerc_client_datamode(client, PLAYERC_DATAMODE_PUSH_NEW) != 0)
	{
		printf("%s", playerc_error_str());
		exit(0);
	}

	// Get the available devices.
	if (playerc_client_get_devlist(client) != 0)
	{
		printf("%s", playerc_error_str());
		exit(0);
	}

    /* body */
    count = playerc_client_peek(client, 50);
    if (count < 0)
    {
      printf("%s", playerc_error_str());
	  return phFAIL;
    }
    if (count > 0)
    {
	playerc_client_read(client);
          :
          :
    }

Responsibility for this project.

  • Build up the sliding system. -- Amit Prakash, Kyewook Lee
  • Player API's and the human face database. -- Kyewook Lee
  • Window display using OpenGL. -- Amit Prakash

http://www.cs.uml.edu/~lkyewook/Humanlaser.ppt

Page last modified on December 13, 2006, at 09:17 PM