Memory Game inspired by Wordle

Hello everyone,
I just published a simple game, but I think it turned out very well. At least my kids love it: https://play.google.com/store/apps/details?id=com.ham.game.memo
I have mixed two concepts: the classic image-matching memory game and Wordle, that is, a daily game board plus the possibility of sharing the result on social networks.
Downloads and comments are appreciated.
For anyone who might be interested, this is the source code showing how I convert the drawable tiles into a Wordle-style image:
public static Bitmap getImageMemo(Context context) {
    // Blank tile matching the current theme; ContextCompat handles
    // the pre-Lollipop fallback internally, replacing the deprecated
    // getResources().getDrawable() calls.
    Drawable blank = ContextCompat.getDrawable(context,
            MemoColors.dark ? R.drawable.keyback : R.drawable.keyback_light);
    // Compose a 5x6 grid of 48px tiles separated by a 2px margin,
    // mirroring the Wordle share-image layout.
    int margin = 2;
    int piece = 48;
    Bitmap image = Bitmap.createBitmap(piece * 5 + margin * 4,
            piece * 6 + margin * 5, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(image);
    int pTop = 0, pLeft = 0;
    for (int row = 1; row < 7; row++) {
        for (int col = 1; col < 6; col++) {
            // Solved cells show their icon; the rest stay blank.
            Drawable drawable = blank;
            if (getActualWord(row, col) < 0) {
                drawable = MemoIcons.getIconByIndex(
                        MemoIcons.getFamily("robots"), getIconToday(row, col));
            }
            drawable.setBounds(pLeft, pTop, pLeft + piece, pTop + piece);
            drawable.draw(canvas);
            pLeft += piece + margin;
        }
        pLeft = 0;
        pTop += piece + margin;
    }
    return image;
}
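And for the Wordle-style sharing part, the bitmap just needs to be written to disk and handed to a share intent. This is a minimal sketch rather than the app's exact code; the cache file name and the ".fileprovider" authority suffix are assumptions:
// Assumed imports: android.content.Intent, android.net.Uri,
// androidx.core.content.FileProvider, java.io.*
public static void shareImageMemo(Context context) {
    Bitmap image = getImageMemo(context);
    try {
        // Write the board image to the app cache (file name is an assumption).
        File file = new File(context.getCacheDir(), "memo_share.png");
        try (FileOutputStream out = new FileOutputStream(file)) {
            image.compress(Bitmap.CompressFormat.PNG, 100, out);
        }
        // Requires a matching <provider> entry in the manifest.
        Uri uri = FileProvider.getUriForFile(context,
                context.getPackageName() + ".fileprovider", file);
        Intent send = new Intent(Intent.ACTION_SEND);
        send.setType("image/png");
        send.putExtra(Intent.EXTRA_STREAM, uri);
        send.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
        context.startActivity(Intent.createChooser(send, "Share result"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}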

Seems very nicely designed. Will give it a try. Thanks!

Related

[Q] Control cursor PC by WP7

I want to control the PC cursor from WP7, so I am trying to use ManipulationDelta in WP7, which can help me calculate the difference between the start tap and the end tap.
Code:
public MainPage()
{
    InitializeComponent();
    this.ManipulationDelta += new EventHandler<ManipulationDeltaEventArgs>(MainPage_ManipulationDelta);
    transformG = new TransformGroup();
    translation = new TranslateTransform();
    transformG.Children.Add(translation);
    // I don't know where to apply this transform: image.RenderTransform moves
    // only the image, but this.RenderTransform moves the whole WP screen.
    // Does anyone have a solution?
    RenderTransform = transformG;
    SenderSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
}
void MainPage_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    startX = e.ManipulationOrigin.X;
    startY = e.ManipulationOrigin.Y;
    DeltaX = e.DeltaManipulation.Translation.X;
    DeltaY = e.DeltaManipulation.Translation.Y;
    translation.X += e.DeltaManipulation.Translation.X;
    translation.Y += e.DeltaManipulation.Translation.Y;
    EndX = translation.X;
    EndY = translation.Y;
}
I just want to send DeltaX and DeltaY to the server, which converts them to a mouse position on the screen, so I wrote this code:
Code:
void StartSending()
{
    while (!stop)
        try
        {
            SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();
            byte[] buffer = Encoding.UTF8.GetBytes(DeltaX.ToString() + "/" + DeltaY.ToString());
            socketEventArg.SetBuffer(buffer, 0, buffer.Length);
            SenderSocket.SendToAsync(socketEventArg);
        }
        catch (Exception) { }
}
I concatenate them into one buffer, separated by "/", and on the server I use this code to split them:
Code:
void Receive(byte[] buffer)
{
    string chaine = "";
    if (SenderSocket != null)
    {
        SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();
        socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(delegate(object s, SocketAsyncEventArgs e)
        {
            if (e.SocketError == SocketError.Success)
            {
                chaine = Encoding.UTF8.GetString(e.Buffer, e.Offset, e.BytesTransferred);
                chaine = chaine.Trim('\0'); // Trim returns a new string; assign it back
                string[] pos = chaine.Split('/');
                for (int i = 0; i < pos.Length; i++)
                {
                    pX = Convert.ToInt32(pos[0]);
                    pY = Convert.ToInt32(pos[1]);
                    this.Cursor = new Cursor(Cursor.Current.Handle);
                    Cursor.Position = new Point(Cursor.Position.X + pX, Cursor.Position.Y + pY);
                }
            }
            else
            {
            }
        });
        SenderSocket.ReceiveFromAsync(socketEventArg);
    }
}
I just want to control the cursor. If you have any other methods, please help me; I would be really grateful.
Didn't you already have a thread about this? Please re-use existing threads instead of starting new ones. Even if it wasn't you, *somebody* was working on this problem already, and very recently. Always use the Search button before starting a thread.
So... what are you looking for from us? Does your current code work? If not, in what way does it fail? Without knowing what your question is, we can't provide answers.
If you want some advice, though...
Sending as strings is very inefficient on both ends; it would be better to use arrays (which you could convert directly to byte arrays and back again).
You're sending as TCP, which is OK but probably not optimal. For this kind of data, UDP is quite possibly better. If nothing else, it provides clearly delineated packets indicating each update.
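To make that suggestion concrete: the PC-side receiver doesn't have to be C# at all. A minimal sketch of a desktop listener in Java, assuming each datagram carries two big-endian 32-bit integers (dx, dy); the port number 4545 is arbitrary:
import java.awt.MouseInfo;
import java.awt.Point;
import java.awt.Robot;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.ByteBuffer;

public class CursorServer {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        try (DatagramSocket socket = new DatagramSocket(4545)) {
            byte[] buf = new byte[8];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (true) {
                socket.receive(packet); // blocks until one update arrives
                ByteBuffer bb = ByteBuffer.wrap(packet.getData(), 0, packet.getLength());
                int dx = bb.getInt();
                int dy = bb.getInt();
                // Move the cursor relative to its current position.
                Point p = MouseInfo.getPointerInfo().getLocation();
                robot.mouseMove(p.x + dx, p.y + dy);
            }
        }
    }
}
Because each datagram is one self-contained update, there is no need for a "/" separator or any string parsing, which is exactly the delineation advantage mentioned above.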

[Q] Audio Level Meter

Hello
I'm new to programming and I'm trying to make a Java application that will "hear" (not necessarily record) sound and display how loud it is. I'm thinking of converting the sound recordings to numbers, so I can see the difference in sound levels. I got this code and added the getLevel() call, which should return the amplitude of the current recording, but it returns -1 every time. I guess I'm not using it properly. Any ideas how I should call this method? I have to deliver my project in a week, so any help will be much appreciated!
Code:
import java.awt.BorderLayout;
import java.awt.Container;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;
import javax.sound.sampled.TargetDataLine;
import javax.swing.JButton;
import javax.swing.JFrame;

public class Capture extends JFrame {
    protected boolean running;
    ByteArrayOutputStream out;

    public Capture() {
        super("Capture Sound Demo");
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        Container content = getContentPane();
        final JButton capture = new JButton("Capture");
        final JButton stop = new JButton("Stop");
        final JButton play = new JButton("Play");
        capture.setEnabled(true);
        stop.setEnabled(false);
        play.setEnabled(false);
        ActionListener captureListener = new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                capture.setEnabled(false);
                stop.setEnabled(true);
                play.setEnabled(false);
                captureAudio();
            }
        };
        capture.addActionListener(captureListener);
        content.add(capture, BorderLayout.NORTH);
        ActionListener stopListener = new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                capture.setEnabled(true);
                stop.setEnabled(false);
                play.setEnabled(true);
                running = false;
            }
        };
        stop.addActionListener(stopListener);
        content.add(stop, BorderLayout.CENTER);
        ActionListener playListener = new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                playAudio();
            }
        };
        play.addActionListener(playListener);
        content.add(play, BorderLayout.SOUTH);
    }

    private void captureAudio() {
        try {
            final AudioFormat format = getFormat();
            DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
            final TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
            line.open(format);
            line.start();
            Runnable runner = new Runnable() {
                int bufferSize = (int) format.getSampleRate() * format.getFrameSize();
                byte buffer[] = new byte[bufferSize];

                public void run() {
                    out = new ByteArrayOutputStream();
                    running = true;
                    try {
                        while (running) {
                            int count = line.read(buffer, 0, buffer.length);
                            if (count > 0) {
                                out.write(buffer, 0, count);
                                System.out.println(line.getLevel()); // <- this is what I added
                            }
                        }
                        out.close();
                    } catch (IOException e) {
                        System.err.println("I/O problems: " + e);
                        System.exit(-1);
                    }
                }
            };
            Thread captureThread = new Thread(runner);
            captureThread.start();
        } catch (LineUnavailableException e) {
            System.err.println("Line unavailable: " + e);
            System.exit(-2);
        }
    }

    private void playAudio() {
        try {
            byte audio[] = out.toByteArray();
            InputStream input = new ByteArrayInputStream(audio);
            final AudioFormat format = getFormat();
            final AudioInputStream ais = new AudioInputStream(input, format,
                    audio.length / format.getFrameSize());
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
            final SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
            line.open(format);
            line.start();
            Runnable runner = new Runnable() {
                int bufferSize = (int) format.getSampleRate() * format.getFrameSize();
                byte buffer[] = new byte[bufferSize];

                public void run() {
                    try {
                        int count;
                        while ((count = ais.read(buffer, 0, buffer.length)) != -1) {
                            if (count > 0) {
                                line.write(buffer, 0, count);
                            }
                        }
                        line.drain();
                        line.close();
                    } catch (IOException e) {
                        System.err.println("I/O problems: " + e);
                        System.exit(-3);
                    }
                }
            };
            Thread playThread = new Thread(runner);
            playThread.start();
        } catch (LineUnavailableException e) {
            System.err.println("Line unavailable: " + e);
            System.exit(-4);
        }
    }

    private AudioFormat getFormat() {
        float sampleRate = 8000;
        int sampleSizeInBits = 8;
        int channels = 1;
        boolean signed = true;
        boolean bigEndian = true;
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    }

    public static void main(String args[]) {
        JFrame frame = new Capture();
        frame.pack();
        frame.setVisible(true);
    }
}
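About getLevel() returning -1: on many Java implementations DataLine.getLevel() is simply not supported and returns AudioSystem.NOT_SPECIFIED, which is -1, no matter how you call it. The usual workaround is to compute the level yourself from the bytes you already read. A minimal sketch for the format above (8-bit signed mono); the helper name is mine:
// Computes the RMS amplitude (0.0 - 1.0) of an 8-bit signed mono buffer.
static double rmsLevel(byte[] buffer, int count) {
    long sumOfSquares = 0;
    for (int i = 0; i < count; i++) {
        sumOfSquares += (long) buffer[i] * buffer[i];
    }
    double rms = Math.sqrt((double) sumOfSquares / count);
    return rms / 128.0; // normalize by the max magnitude of a signed byte
}
Inside the capture loop, System.out.println(rmsLevel(buffer, count)) then prints a value between 0.0 (silence) and 1.0 (full scale) instead of -1.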
OK, I managed to make it capture audio and print the timestamp and the value of the current sample to an xls file, but there is a problem: even though I've put some spaces between the time and the value so that they seem to be in different columns, they actually end up in the same column of the xls; the cell just expands and covers the next column (I can post a print screen if that's unclear). How can I make it print the time and amplitude data in two different columns? Here's the code of the class which creates the file and saves the data:
Code:
package soundRecording;

import java.io.File;
import java.util.Formatter;

public class Save {
    static Formatter y;

    public static void createFile() {
        Date thedate = new Date(); // Date here is the project's own helper class
        final String folder = thedate.curDate();
        final String fileName = thedate.curTime();
        try {
            String name = "Time_" + fileName + ".csv";
            y = new Formatter(name);
            File nof = new File(name);
            nof.createNewFile();
            System.out.println("A new file was created.");
        } catch (Exception e) {
            System.out.println("There was an error.");
        }
    }

    public void addValues(byte audio) {
        Date d = new Date();
        y.format("%s " + " %s%n", d.curTime(), audio);
    }

    public void closeFile() {
        y.close();
    }
}
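The file the code actually creates is a .csv, which explains the behavior: spreadsheet applications split CSV columns on the delimiter (a comma, or the locale's list separator), so any number of spaces still lands in a single cell. Writing a comma between the two values should produce two real columns. A minimal sketch of the changed addValues:
public void addValues(byte audio) {
    Date d = new Date();
    // A comma, not spaces, is what separates CSV columns.
    y.format("%s,%s%n", d.curTime(), audio);
}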

[App] [Tutorial] Learn to make a Compass Application

Hello,
I created this post to present a video tutorial showing how to create a compass application for Android step by step. The tutorial lets beginners get started with sensors on Android and also discover how to get a GPS location with the default Android services.
The tutorial is here:
A demo application is also available on the Google Play Store: https://play.google.com/store/apps/details?id=com.ssaurel.tinycompass
Don't hesitate to tell me if you want more details about the source code.
Sylvain
Compass View source code
Hello,
To start with the source code, here is the CompassView source code:
Code:
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;

import com.ssaurel.tinycompass.R;

/**
 * Compass view.
 *
 * @author Sylvain Saurel - [email protected]
 */
public class CompassView extends View {
    private static final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private int width = 0;
    private int height = 0;
    private Matrix matrix;
    private Bitmap bitmap;
    private float bearing;

    public CompassView(Context context) {
        super(context);
        initialize();
    }

    public CompassView(Context context, AttributeSet attr) {
        super(context, attr);
        initialize();
    }

    private void initialize() {
        matrix = new Matrix();
        bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.compass_icon);
    }

    public void setBearing(float bearing) {
        this.bearing = bearing;
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        width = MeasureSpec.getSize(widthMeasureSpec);
        height = MeasureSpec.getSize(heightMeasureSpec);
        setMeasuredDimension(width, height);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        int bitmapWidth = bitmap.getWidth();
        int bitmapHeight = bitmap.getHeight();
        int canvasWidth = canvas.getWidth();
        int canvasHeight = canvas.getHeight();
        if (bitmapWidth > canvasWidth || bitmapHeight > canvasHeight) {
            // resize to fit canvas
            bitmap = Bitmap.createScaledBitmap(bitmap,
                    (int) (bitmapWidth * .85), (int) (bitmapHeight * .85), true);
        }
        // calculate center position
        int bitmapX = bitmap.getWidth() / 2;
        int bitmapY = bitmap.getHeight() / 2;
        int parentX = width / 2;
        int parentY = height / 2;
        int centerX = parentX - bitmapX;
        int centerY = parentY - bitmapY;
        // rotation angle
        int rotation = (int) (360 - bearing);
        // transformation matrix
        matrix.reset();
        // rotate around the bitmap center so North points up
        matrix.setRotate(rotation, bitmapX, bitmapY);
        // translate bitmap to the view center
        matrix.postTranslate(centerX, centerY);
        // draw bitmap
        canvas.drawBitmap(bitmap, matrix, paint);
    }
}
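For context on where the bearing comes from: the sensor wiring is shown in the video rather than in this post, but a minimal sketch of one common approach (accelerometer plus magnetometer combined through SensorManager.getRotationMatrix) could look like this; the compassView field and the listener structure are assumptions, not the tutorial's exact code:
// Activity sketch: feeds azimuth readings into CompassView.setBearing().
private final SensorEventListener listener = new SensorEventListener() {
    private float[] gravity;
    private float[] geomagnetic;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) gravity = event.values.clone();
        if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) geomagnetic = event.values.clone();
        if (gravity == null || geomagnetic == null) return;
        float[] r = new float[9];
        float[] orientation = new float[3];
        if (SensorManager.getRotationMatrix(r, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(r, orientation);
            // orientation[0] is the azimuth in radians; convert to degrees 0-360.
            float azimuth = (float) Math.toDegrees(orientation[0]);
            if (azimuth < 0) azimuth += 360;
            compassView.setBearing(azimuth);
            compassView.invalidate(); // trigger onDraw with the new bearing
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};
Register the listener for both sensors in onResume and unregister it in onPause so the sensors do not keep draining the battery in the background.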
Don't hesitate to post if you have comments.
Sylvain
Hello,
A blog article complementing the video tutorial is also available now: http://www.ssaurel.com/blog/learn-how-to-make-a-compass-application-for-android/
Sylvain

Image not saving new dimensions after resize using Xamarin C#

I am only posting here because it won't allow me to post in the other forums.
I am taking a photo with my app, then rotating it if needed, then resizing it. After that I save the new rotated and scaled-down picture to the SD card. The photo does rotate and the byte size changes, but when I view the new photo it still has the same dimensions as the original.
My resizing and saving calls are:
Code:
//resize for preview
using (var bitmap = _file.Path.LoadAndResizeBitmap(300, 225))
{
    //resize for saving
    Bitmap resized = _file.Path.LoadAndResizeBitmap(800, 600);
    BitmapHelpers.saveBitmap(resized, _user_id, _dateStr);
    //shows preview resize
    photo.SetImageBitmap(ConvertToBitmap(bitmap));
}

public static Bitmap LoadAndResizeBitmap(this string fileName, int width, int height)
{
    // First we get the dimensions of the file on disk
    BitmapFactory.Options options = new BitmapFactory.Options { InJustDecodeBounds = true };
    BitmapFactory.DecodeFile(fileName, options);
    // Next we calculate the ratio that we need to resize the image by
    // in order to fit the requested dimensions.
    int outHeight = options.OutHeight;
    int outWidth = options.OutWidth;
    int inSampleSize = 1;
    if (outHeight > height || outWidth > width)
    {
        inSampleSize = outWidth > outHeight
            ? outHeight / height : outWidth / width;
    }
    // Now we will load the image
    options.InSampleSize = inSampleSize;
    options.InJustDecodeBounds = false;
    Bitmap resizedBitmap = BitmapFactory.DecodeFile(fileName, options);
    // Images are being saved in landscape, so rotate them back to portrait if they were taken in portrait
    Matrix mtx = new Matrix();
    Android.Media.ExifInterface exif = new Android.Media.ExifInterface(fileName);
    string orientation = exif.GetAttribute(Android.Media.ExifInterface.TagOrientation);
    switch (orientation)
    {
        case "6": // portrait
            mtx.PreRotate(90);
            resizedBitmap = Bitmap.CreateBitmap(resizedBitmap, 0, 0, resizedBitmap.Width, resizedBitmap.Height, mtx, false);
            mtx.Dispose();
            mtx = null;
            break;
        case "1": // landscape
            //mtx.PreRotate(-90);
            //resizedBitmap = Bitmap.CreateBitmap(resizedBitmap, 0, 0, resizedBitmap.Width, resizedBitmap.Height, mtx, false);
            //mtx.Dispose();
            //mtx = null;
            break;
        default:
            mtx.PreRotate(90);
            resizedBitmap = Bitmap.CreateBitmap(resizedBitmap, 0, 0, resizedBitmap.Width, resizedBitmap.Height, mtx, false);
            mtx.Dispose();
            mtx = null;
            break;
    }
    return resizedBitmap;
}

public static void saveBitmap(Bitmap bitmap, int userId, String dateStr)
{
    var sdCardPath = Android.OS.Environment.ExternalStorageDirectory.AbsolutePath;
    String fileName = "bb_" + userId + "_" + dateStr + "_new.jpg";
    var filePath = System.IO.Path.Combine(sdCardPath, fileName);
    var stream = new FileStream(filePath, FileMode.Create);
    bitmap.Compress(Bitmap.CompressFormat.Jpeg, 100, stream);
    stream.Close();
}
When it does the first _file.Path.LoadAndResizeBitmap(300, 225), the inSampleSize is 3 and it returns new dimensions of 640x360.
When it does _file.Path.LoadAndResizeBitmap(800, 600), the inSampleSize is 1 and it returns the original dimensions of 1280x720.
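Both results follow from how InSampleSize works: it is an integer subsampling factor that the decoder rounds down to a power of two, and it can only shrink, never hit an exact size. For 300x225 the computed factor 3 is rounded down to 2, so 1280x720 decodes as 640x360; for 800x600 the integer division 720 / 600 gives 1, so the image decodes at full size, and nothing afterwards changes its dimensions before saveBitmap writes it out (only the JPEG re-compression changes the byte size). To reach exact target dimensions, add an explicit scaling pass after decoding. The Xamarin bindings wrap the same Android API (Bitmap.CreateScaledBitmap on the C# side); a minimal sketch in Java terms, with an illustrative helper name:
// Decodes near the target size with inSampleSize, then scales to the exact
// requested dimensions.
static Bitmap loadAtExactSize(String fileName, int width, int height) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(fileName, options);
    // inSampleSize only subsamples by power-of-two integer factors,
    // so this first pass gets us close, not exact.
    int inSampleSize = 1;
    while (options.outWidth / (inSampleSize * 2) >= width
            && options.outHeight / (inSampleSize * 2) >= height) {
        inSampleSize *= 2;
    }
    options.inSampleSize = inSampleSize;
    options.inJustDecodeBounds = false;
    Bitmap decoded = BitmapFactory.decodeFile(fileName, options);
    // The second pass produces exactly the dimensions the caller asked for.
    return Bitmap.createScaledBitmap(decoded, width, height, true);
}
Adding the equivalent CreateScaledBitmap pass before saveBitmap should make the saved file carry the requested 800x600 dimensions.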

Google Vision: Drawing mask on Face with animations

I am using the Google Vision library for face detection. Face detection is perfect and I get all the info like vertices and angles such as eulerY and eulerZ.
I want to draw a mask on the face. Drawing is OK, but the face mask is not following the face position as it should; the position is not correct. Here is my edited code to draw a face mask in the googly eyes sample project.
Here is my source code:
package com.google.android.gms.samples.vision.face.googlyeyes;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.PointF;
import android.graphics.Rect;
import android.graphics.drawable.Drawable;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.View;
import android.widget.FrameLayout;
import android.widget.ImageView;
import android.widget.LinearLayout;

import com.google.android.gms.samples.vision.face.googlyeyes.ui.camera.GraphicOverlay;
import com.google.android.gms.vision.face.Face;

import java.util.HashMap;

/**
 * Graphics class for rendering Googly Eyes on a graphic overlay given the current eye positions.
 */
class GooglyEyesGraphic extends GraphicOverlay.Graphic {
    private Paint mEyeWhitesPaint;
    private Paint mEyeIrisPaint;
    private Paint mEyeOutlinePaint;
    private Paint mEyeLidPaint;
    Paint mBoxPaint;
    Context mContext;
    private static final float BOX_STROKE_WIDTH = 20.0f;
    FrameLayout frameLayout;
    ImageView imageView;
    // Bitmap bmpOriginal;

    //==============================================================================================
    // Methods
    //==============================================================================================

    GooglyEyesGraphic(GraphicOverlay overlay, Context mContext) {
        super(overlay);
        this.mContext = mContext;
        mEyeWhitesPaint = new Paint();
        mEyeWhitesPaint.setColor(Color.WHITE);
        mEyeWhitesPaint.setStyle(Paint.Style.FILL);
        mEyeLidPaint = new Paint();
        mEyeLidPaint.setColor(Color.YELLOW);
        mEyeLidPaint.setStyle(Paint.Style.FILL);
        mEyeIrisPaint = new Paint();
        mEyeIrisPaint.setColor(Color.BLACK);
        mEyeIrisPaint.setStyle(Paint.Style.FILL);
        mEyeOutlinePaint = new Paint();
        mEyeOutlinePaint.setColor(Color.BLACK);
        mEyeOutlinePaint.setStyle(Paint.Style.STROKE);
        mEyeOutlinePaint.setStrokeWidth(5);
        mBoxPaint = new Paint();
        mBoxPaint.setColor(Color.MAGENTA);
        mBoxPaint.setStyle(Paint.Style.STROKE);
        mBoxPaint.setStrokeWidth(BOX_STROKE_WIDTH);
        mBoxPaint.setAlpha(40);
        LayoutInflater li = (LayoutInflater) mContext.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        View view = li.inflate(R.layout.mask_layout, null);
        imageView = (ImageView) view.findViewById(R.id.flMaskIV);
        frameLayout = (FrameLayout) view.findViewById(R.id.frameLayout);
    }

    private volatile Face mFace;

    /**
     * Updates the eye positions and state from the detection of the most recent frame. Invalidates
     * the relevant portions of the overlay to trigger a redraw.
     */
    void updateEyes(PointF leftPosition, boolean leftOpen,
                    PointF rightPosition, boolean rightOpen, Face mFace) {
        if (facesList.containsKey(mFace.getId())) {
            PointF pointF1 = facesList.get(mFace.getId()).getPosition();
            PointF pointF2 = mFace.getPosition();
            double x = Math.sqrt(Math.pow(pointF2.x - pointF1.x, 2) - Math.pow(pointF2.y - pointF1.y, 2));
            if (x < 0)
                x = (-1 * x);
            if (x < 10)
                return;
            Log.e("face Called", "FaceCalled");
        }
        this.mFace = mFace;
        facesList.put(mFace.getId(), mFace);
        postInvalidate();
    }

    public HashMap<Integer, Face> facesList = new HashMap<>();

    /**
     * Draws the current eye state to the supplied canvas. This will draw the eyes at the last
     * reported position from the tracker, and the iris positions according to the physics
     * simulations for each iris given motion and other forces.
     */
    @Override
    public void draw(Canvas canvas) {
        if (mFace == null)
            return;
        // if (facesList.containsKey(mFace.getId())) {
        //     PointF pointF1 = facesList.get(mFace.getId()).getPosition();
        //     PointF pointF2 = mFace.getPosition();
        //     double x = Math.sqrt(Math.pow(pointF2.x - pointF1.x, 2) - Math.pow(pointF2.y - pointF1.y, 2));
        //     if (x < 0)
        //         x = (-1 * x);
        //     if (x < 10)
        //         return;
        //     Log.e("face Called", "FaceCalled");
        // }
        // facesList.put(mFace.getId(), mFace);
        if (this.canvas == null)
            this.canvas = canvas;
        applyMask();
    }

    Drawable drawable;
    Canvas canvas;

    private void applyMask() {
        if (canvas == null)
            return;
        // Log.e("mFace.getEulerY()", "mFace.getEulerY()=> " + mFace.getEulerY());
        if (GooglyEyesActivity.maskImgView != null) {
            GooglyEyesActivity.maskImgView.setVisibility(View.GONE);
            GooglyEyesActivity.maskImgView.setImageResource(GooglyEyesActivity.currEmoticonID);
        }
        float x = translateX(mFace.getPosition().x + mFace.getWidth() / 2);
        float y = translateY(mFace.getPosition().y + mFace.getHeight() / 2);
        // Draws a bounding box around the face.
        float xOffset = scaleX(mFace.getWidth() / 2.0f);
        float yOffset = scaleY(mFace.getHeight() / 2.0f);
        float left = x - xOffset - 50;
        float top = y - yOffset - 50;
        float right = x + xOffset + 50;
        float bottom = y + yOffset + 50;
        // canvas.drawRect((int) left, (int) top, (int) right, (int) bottom, mBoxPaint);
        drawable = GooglyEyesActivity.maskImgView.getDrawable();
        canvas.save();
        canvas.translate(left, top);
        // frameLayout.setX(left);
        // frameLayout.setY(top);
        Rect rect = new Rect((int) left, (int) top, (int) right, (int) bottom);
        frameLayout.measure(rect.width(), rect.height());
        frameLayout.setLayoutParams(new LinearLayout.LayoutParams(rect.width(), rect.height()));
        frameLayout.layout(0, 0, (int) right, (int) bottom);
        frameLayout.setClipBounds(rect);
        imageView.setLayoutParams(new FrameLayout.LayoutParams(rect.width(), rect.height()));
        imageView.setRotationY(mFace.getEulerY());
        imageView.setRotation(mFace.getEulerZ());
        imageView.setImageDrawable(drawable);
        frameLayout.draw(canvas);
        canvas.restore();
    }
}
Also, I need to add animations, so I tried using the dlib library to get landmark points and draw them with OpenGL, but in OpenGL I don't have any function to populate the vertex array from the points I am getting from dlib; the dlib landmarks are points, but the OpenGL array is not laid out that way. Any help will be appreciated for both scenarios.
Thank you in advance.
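On the position problem: one likely culprit in draw() is that the first Canvas handed in is cached (this.canvas is only set once) and applyMask() keeps drawing into that stale object, so later frames never land on the overlay's current buffer. (Incidentally, the throttle in updateEyes takes the square root of a difference of squares, which goes NaN whenever the y-movement exceeds the x-movement; a Euclidean distance needs a plus there.) A hedged sketch, not a confirmed fix, of a draw() that uses the per-frame canvas directly and skips the view inflation; maskBitmap and maskBitmapRect are assumed fields (the mask decoded as a Bitmap, plus a reusable android.graphics.RectF):
@Override
public void draw(Canvas canvas) {
    Face face = mFace; // volatile read; one consistent snapshot per frame
    if (face == null) return;
    // Recompute the face box from this frame's detection results.
    float cx = translateX(face.getPosition().x + face.getWidth() / 2);
    float cy = translateY(face.getPosition().y + face.getHeight() / 2);
    float halfW = scaleX(face.getWidth() / 2.0f);
    float halfH = scaleY(face.getHeight() / 2.0f);
    canvas.save();
    // Tilt the mask with the head roll (eulerZ) around the face center.
    canvas.rotate(face.getEulerZ(), cx, cy);
    maskBitmapRect.set(cx - halfW, cy - halfH, cx + halfW, cy + halfH);
    canvas.drawBitmap(maskBitmap, null, maskBitmapRect, null);
    canvas.restore();
}
Rotating the canvas around the face center applies eulerZ; a true perspective eulerY tilt isn't possible with a plain Canvas, which is one more argument for the OpenGL route you're already exploring.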
aijaz070110 said:
[...]
Do you have any progress on this?
