Variable precision of float in NSString
float f = 1.23456;
NSLog(@"%.2f", f);
NSLog(@"%.0f", f);
NSLog(@"%.5f", f);
1.23 1 1.23456
[NSTimer scheduledTimerWithTimeInterval:1.0
target:self selector: @selector(methodName:)
userInfo:nil repeats:NO];
An alternative to Part I [35]; with repeats:YES it also allows repetitive calls.
Not sure if anyone except me will need this information again.
Situation: A legacy RubyCocoa application that runs fine on OS X 10.5, but refuses to compile on 10.6. RubyCocoa is working; only access to constants seems to be problematic.
So far the following workarounds:
OSX::KCGScreenSaverWindowLevel → OSX::NSScreenSaverWindowLevel
Bug report and workaround about improperly mapped constants in BridgeSupport on Snow Leopard: http://lists.macosforge.org/pipermail/macruby-devel/2009-October.txt
Notes: Structs in Ruby
svn co http://svn.macosforge.org/repository/ruby/MacRuby/trunk/misc/xcode-templates/ ruby-templates
svn co http://svn.red-bean.com/pyobjc/trunk/pyobjc/pyobjc-xcode/ python-templates
Info from here: http://developer.apple.com/mac/library/releasenotes/DeveloperTools/RN-Xcode/index.html
Common problems usually already have a simple solution. Like this one:
Problem: You subclassed UIView and want to do some custom drawing in drawRect:, but no matter what you do or where you draw, the background of the view remains black.
- (void)drawRect:(CGRect)rect {
// Drawing code
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1.0);
CGContextFillEllipseInRect(context, rect);
}
Solution: In the view controller that creates the drawing view, add
myDrawingClass.opaque = NO;
and not, inside the view itself,
self.opaque = NO;
Ok, a bit late to the party. Apple officially approved the use of UIGetScreenImage().
After carefully considering the issue, Apple is now allowing applications to use the function UIGetScreenImage() to programmatically capture the current screen contents. The function prototype is as follows:
CGImageRef UIGetScreenImage(void);
https://devforums.apple.com/message/149553
How to capture a view, ideally the live input of the camera? Unfortunately there's no clean and clear interface for that. Only the undocumented UIGetScreenImage() call:
http://www.iphonedevsdk.com/forum/iphone-sdk-development/11219-screenshots-we-allowed-use-uigetscreenimage.html
http://stackoverflow.com/questions/1531815/takepicture-vs-uigetscreenimage
http://svn.saurik.com/repos/menes/trunk/iphonevnc/iPhoneVNC.mm
http://blogs.oreilly.com/iphone/2008/10/creating-a-full-screen-camera.html
But since UIGetScreenImage() was an undocumented call (in 3.1), here is the official way to get the contents of a view: render its layer into an offscreen image context, then grab the result.
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIGraphicsGetImageFromCurrentImageContext();
The question is rather simple: How to manipulate single pixels of a UIImage? The answer is rather long, but includes a joyful trip into Quartz 2D graphics land...
// load image, convert to CGImageRef
UIImage *c = [UIImage imageNamed:@"c.png"];
CGImageRef cRef = CGImageRetain(c.CGImage);
// png alpha to mask
NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(cRef));
// image raw data
//NSData* pixelDataRep = UIImagePNGRepresentation(c);
// compressed png data
//NSLog(@"pixelData %i", [pixelData length]);
//NSLog(@"pixelDataRep %i", [pixelDataRep length]);
//NSLog(@"pixelDataRep equal to pixelData: %@", [pixelData isEqualToData:pixelDataRep] ? @"YES" : @"NO");
//UIImage* newImage = [UIImage imageWithData:pixelData];
//[newImage drawInRect:CGRectMake(10, 340, 65, 65)];
//NSLog(@"pixelData %@", pixelData);
unsigned char* pixelBytes = (unsigned char *)[pixelData bytes];
// return pointer to data
// step through char data
for(int i = 0; i < [pixelData length]; i += 4) {
// change accordingly
pixelBytes[i] = pixelBytes[i];
pixelBytes[i+1] = pixelBytes[i+1];
pixelBytes[i+2] = pixelBytes[i+2];
pixelBytes[i+3] = 255;
}
//1ms in Simulator , 5ms on iPhone 3GS , 65x65 pixel
// copy bytes in new NSData
NSData* newPixelData = [NSData dataWithBytes:pixelBytes length:[pixelData length]];
//NSLog(@"newPixelData %@", newPixelData);
//NSLog(@"newPixelData: %@", newPixelData ? @"ok" : @"nil");
//NSLog(@"newPixelData equal to pixelData: %@", [pixelData isEqualToData:newPixelData] ? @"YES" : @"NO");
// cast NSData as CFDataRef; use newPixelData, the copy holding the modified bytes
CFDataRef imgData = (CFDataRef)newPixelData;
//NSLog(@"CFDataGetLength %i", CFDataGetLength(imgData) );
// Make a data provider from CFData
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
// testing... create data provider from file.... works
//NSString* imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"c.png"];
//CGDataProviderRef imgDataProvider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
// does not work like that
// new image needs to get PNG properties
//CGImageRef throughCGImage = CGImageCreateWithPNGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
// get PNG properties from cRef
size_t width = CGImageGetWidth(cRef);
size_t height = CGImageGetHeight(cRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(cRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(cRef);
size_t bytesPerRow = CGImageGetBytesPerRow(cRef);
CGColorSpaceRef colorSpace = CGImageGetColorSpace(cRef);
CGBitmapInfo info = CGImageGetBitmapInfo(cRef);
CGFloat *decode = NULL;
BOOL shouldInterpolate = NO;
CGColorRenderingIntent intent = CGImageGetRenderingIntent(cRef);
// cRef PNG properties + imgDataProvider's data
CGImageRef throughCGImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, info, imgDataProvider, decode, shouldInterpolate, intent);
CGDataProviderRelease(imgDataProvider);
//NSLog(@"c %i, throughCGImage: %i", CGImageGetHeight(cRef), CGImageGetHeight(throughCGImage) );
// make UIImage with CGImage
UIImage* newImage = [UIImage imageWithCGImage:throughCGImage];
// release only after the UIImage has been created from it
CGImageRelease(throughCGImage);
//NSLog(@"newImage: %@", newImage);
// draw UIImage
[newImage drawInRect:CGRectMake(10, 340, 65, 65)];
References:
NSData* pixelData = (NSData*)
CGDataProviderCopyData(CGImageGetDataProvider(c.CGImage));
unsigned char* pixelBytes = (unsigned char *)[pixelData bytes];
for(int i = 0; i < [pixelData length]; i += 4) {
NSLog(@"pixelBytes[i] R:%i G:%i B:%i A:%i ",
(int)pixelBytes[i],
(int)pixelBytes[i+1],
(int)pixelBytes[i+2],
(int)pixelBytes[i+3]);
/*
pixelBytes[i] = pixelBytes[i+3];
pixelBytes[i+1] = pixelBytes[i+3];
pixelBytes[i+2] = pixelBytes[i+3];
pixelBytes[i+3] = 0;
*/
}
NSData* newPixelData = [NSData dataWithBytes:pixelBytes length:[pixelData length]];
// caution: imageWithData: expects encoded image data (PNG/JPEG), not raw pixel
// bytes, and returns nil here; use the CGImageCreate approach from above instead
UIImage* newImage = [UIImage imageWithData:newPixelData];
In Cocoa, NSImage has a lockFocus method that allows you to draw images offscreen and combine them into one.
[img lockFocus];
//...
[img unlockFocus];
On the iPhone, UIImage lacks the lockFocus method; instead, use the following:
// Create new offscreen context with desired size
UIGraphicsBeginImageContext(CGSizeMake(64.0f, 64.0f));
// draw img at 0,0 in the context
[img drawAtPoint:CGPointZero];
// draw another at 0,0 in the context, maybe with an alpha value
[another drawAtPoint:CGPointZero];
// ... and other operations
// assign context to UIImage
UIImage *outputImg = UIGraphicsGetImageFromCurrentImageContext();
// end context
UIGraphicsEndImageContext();
launchd can be used to monitor files and folders and execute certain actions when these files or folders change. This is especially useful when an app crashes and writes a log to the CrashReporter folder. Tutorial, Lingon Helper App
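A minimal launchd job sketch using the WatchPaths key; the label, script path, and watched folder are made-up placeholders for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- hypothetical job label -->
    <key>Label</key>
    <string>com.example.crashwatch</string>
    <!-- hypothetical script to run when a watched path changes -->
    <key>ProgramArguments</key>
    <array>
        <string>/Users/me/bin/on-crash.sh</string>
    </array>
    <!-- launchd starts the job whenever this folder changes -->
    <key>WatchPaths</key>
    <array>
        <string>/Users/me/Library/Logs/CrashReporter</string>
    </array>
</dict>
</plist>
```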