
Answers

Here are the answers given by this user in the forum.

It can easily be done with base R:
***
set.seed(101) # Set seed so that the same sample can be reproduced later
# Now select 75% of the data as a sample from the total 'n' rows
sample <- sample.int(n = nrow(data), size = floor(.75*nrow(data)), replace = FALSE)
train <- data[sample, ]
test <- data[-sample, ]***

Or by using the caTools package:

***require(caTools)
set.seed(101)
sample = sample.split(data$anycolumn, SplitRatio = .75)
train = subset(data, sample == TRUE)
test = subset(data, sample == FALSE)***
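
A quick sanity check that the two pieces really partition the data (assuming data is any data frame):

***nrow(train) + nrow(test) == nrow(data) # TRUE: every row lands in exactly one piece***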
4 years ago
In my testing, I found the following speeds for each variant.
***
select * from table where condition=value
(1 total, Query took 0.0052 sec)

select exists(select * from table where condition=value)
(1 total, Query took 0.0008 sec)

select count(*) from table where condition=value limit 1
(1 total, Query took 0.0007 sec)

select exists(select * from table where condition=value limit 1)
(1 total, Query took 0.0006 sec) ***
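
The EXISTS form can stop scanning at the first matching row, which is presumably why it wins here. As a usage sketch (same placeholder table and condition as above), it returns 1 or 0 rather than a row set:

***select exists(select 1 from table where condition=value limit 1) as has_row;***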
4 years ago
I expect there is the possibility of subtle differences in their execution. I checked the execution plans for two functionally equivalent queries along these lines in Oracle 10g:

***core> select sta from zip group by sta;

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 58 | 174 | 44 (19)| 00:00:01 |
| 1 | HASH GROUP BY | | 58 | 174 | 44 (19)| 00:00:01 |
| 2 | TABLE ACCESS FULL| ZIP | 42303 | 123K| 38 (6)| 00:00:01 |
---------------------------------------------------------------------------

core> select distinct sta from zip;

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 58 | 174 | 44 (19)| 00:00:01 |
| 1 | HASH UNIQUE | | 58 | 174 | 44 (19)| 00:00:01 |
| 2 | TABLE ACCESS FULL| ZIP | 42303 | 123K| 38 (6)| 00:00:01 |
---------------------------------------------------------------------------
***
The middle operation is slightly different: "HASH GROUP BY" vs. "HASH UNIQUE", but the estimated costs etc. are identical. I then executed these with tracing on and the actual operation counts were the same for both (except that the second one didn't have to do any physical reads due to caching).

But I think that because the operation names are different, the execution would follow somewhat different code paths and that opens the possibility of more significant differences.

I think you should prefer the DISTINCT syntax for this purpose. It's not just habit; it more clearly indicates the purpose of the query.
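
For contrast, GROUP BY remains the right tool as soon as aggregation enters the picture, e.g. counting rows per state in the same table:

***select sta, count(*) from zip group by sta;***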
4 years ago
***INSERT INTO files (title) VALUES ('whatever');
SELECT * FROM files WHERE id = SCOPE_IDENTITY();***

This is the safest bet, since there is a known issue with OUTPUT clause conflicts on tables with triggers. That makes the OUTPUT clause quite unreliable: even if your table doesn't currently have any triggers, someone adding one down the line will break your application. It's time-bomb sort of behaviour.
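
For reference, this is the OUTPUT-clause pattern the warning is about; without an INTO clause it raises an error as soon as the target table has an enabled trigger:

***-- breaks once any trigger is enabled on files: OUTPUT without INTO is
-- rejected on tables with enabled triggers
INSERT INTO files (title)
OUTPUT INSERTED.id, INSERTED.title
VALUES ('whatever');***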

See the MSDN article for a deeper explanation.
4 years ago
Just in case you don't want to use a stored proc, here's a simple query version:
***
select *
from information_schema.columns
where table_name = 'aspnet_Membership'
order by ordinal_position***
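
If the same table name might exist in more than one schema, it's worth qualifying the lookup (the 'dbo' value is just a placeholder):

***select *
from information_schema.columns
where table_name = 'aspnet_Membership'
  and table_schema = 'dbo'
order by ordinal_position***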
4 years ago
When using React, you should never mutate the state directly. If an object (or an array, which is an object too) needs to change, you should create a new copy.

Others have suggested using Array.prototype.splice(), but that method mutates the Array, so it's better not to use splice() with React.

Easiest to use Array.prototype.filter() to create a new array:

***removePeople(e) {
  this.setState({
    people: this.state.people.filter(function(person) {
      return person !== e.target.value;
    })
  });
}***
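
If you ever need to remove by index rather than by value, slice() plus spread is the non-mutating counterpart of splice() (a sketch; removePersonAt and index are hypothetical names):

***removePersonAt(index) {
  this.setState({
    people: [
      ...this.state.people.slice(0, index),   // everything before index
      ...this.state.people.slice(index + 1)   // everything after index
    ]
  });
}***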
4 years ago
If you are using the tidyverse, you can use

***as_data_frame(table(myvector))***

to get a tibble (i.e. a data frame with some minor variations from the base class). Note that as_data_frame() has since been deprecated in favour of as_tibble(), which works the same way here.
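
A minimal sketch with the current function name, assuming myvector is a small character vector:

***library(tibble)
myvector <- c("a", "b", "a", "c", "a")
as_tibble(table(myvector))
# a 3 x 2 tibble with columns: myvector (the distinct values) and n (their counts)***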
4 years ago
It is certainly not efficient for this particular purpose, but what I often do in these situations is to insert indicator variables in each data.frame and then merge:

***a1$included_a1 <- TRUE
a2$included_a2 <- TRUE
res <- merge(a1, a2, all=TRUE)***
Missing values in included_a1 will mark which rows are missing from a1, and similarly for a2.

One problem with your solution is that the column orders must match. Another problem is that it is easy to imagine situations where rows are coded as the same when in fact they are different. The advantage of using merge is that you get all the error checking necessary for a good solution for free.
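
For example, to pull out the rows that came only from a2:

***# rows where included_a1 is NA had no match in a1
only_in_a2 <- res[is.na(res$included_a1), ]***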
4 years ago
I found the answer :)

I've set the Theme's editText style to:

***@style/myEditText***

Then I've used a drawable to set the cursor, as sketched below.

*android:textCursorDrawable* is the key here.
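
A minimal reconstruction of those two pieces, assuming the style is named myEditText and the cursor drawable lives in a hypothetical file res/drawable/color_cursor.xml:

***<!-- styles.xml: android:textCursorDrawable points the cursor at a custom drawable -->
<style name="myEditText" parent="@android:style/Widget.EditText">
    <item name="android:textCursorDrawable">@drawable/color_cursor</item>
</style>

<!-- res/drawable/color_cursor.xml: a plain colored bar drawn as the cursor -->
<shape xmlns:android="http://schemas.android.com/apk/res/android"
       android:shape="rectangle">
    <solid android:color="#FF000000"/>
    <size android:width="2dp"/>
</shape>***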
4 years ago
React Hooks (16.8+):
***
// useState requires React 16.8+
import React, { useState } from 'react';

const Dropdown = ({ options }) => {
  // default to the first option's value (assumes options is non-empty
  // and each entry has the shape { label, value })
  const [selectedOption, setSelectedOption] = useState(options[0].value);
  return (
    <select
      value={selectedOption}
      onChange={e => setSelectedOption(e.target.value)}>
      {options.map(o => (
        <option key={o.value} value={o.value}>
          {o.label}
        </option>
      ))}
    </select>
  );
};***
4 years ago
For concatenating variables and strings in React with map, for example:

***{listOfCategories.map((category, key) => {
  // illustrative element: 'category_' + key concatenates a string with a variable
  return <li key={'category_' + key}>{category}</li>;
})}***
4 years ago
SOAP is an ill-suited technology for use on Android (or mobile devices in general) because of the processing/parsing overhead it requires. A REST service is a lighter-weight solution, and that's what I would suggest. Android comes with a SAX parser, and it's fairly trivial to use. If you are absolutely required to handle/parse SOAP on a mobile device, then I feel sorry for you; the best advice I can offer is just not to use SOAP.
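
Since the SAX route is the recommendation here, a minimal sketch of the kind of parsing involved, using the org.xml.sax / javax.xml.parsers classes that ship with Android (the item payload and ItemHandler name are made-up examples; this also runs on plain desktop Java):

***import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class ItemHandler extends DefaultHandler {
    private final StringBuilder text = new StringBuilder();

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        text.setLength(0); // reset the buffer at each new element
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        text.append(ch, start, length); // collect character data
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if ("item".equals(qName)) {
            System.out.println("item: " + text.toString().trim());
        }
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        String xml = "<items><item>a</item><item>b</item></items>";
        parser.parse(new InputSource(new StringReader(xml)), new ItemHandler());
    }
}***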
4 years ago